Xerox's laser printers were never "based on Forth".
JAM and Interpress were only conceptually related to Forth, in that they used postfix semantics.
That's like saying that because Java byte code uses a stack, your Android phone is based on Forth. It's not. Stack based architectures (like the Burroughs B5500 and B6500, developed in 1962, which directly inspired Design System => JAM => Interpress => PostScript) have been around a lot longer than Forth (developed in 1970), and simply using a stack doesn't make a language Forth.
Here's a much more accurate and detailed historical account of the history of Interpress, written by Brian Reid:
Gaffney came to E&S from graduate school at the University of
Illinois, where he had used the Burroughs B5500 and B6500 computers.
Their stack-oriented architectures made a big impression on him. He
combined the execution semantics of the Burroughs machines with the
evolving Evans and Sutherland imaging models, to produce the Design
System. Like all successful software systems, the Design System slowly
evolved as it was used, and many people contributed to that evolution.
John Warnock joined Xerox PARC in 1978 to work for Chuck Geschke.
There he teamed up with Martin Newell in producing an interpreted
graphics system called JAM. "JAM" stands for "John And Martin". JAM
had the same postfix execution semantics as Gaffney's Design System,
and was based on the Evans and Sutherland imaging model, but
augmented the E&S imaging model by providing a much more extensive
set of graphics primitives. Like the later versions of the Design
System, JAM was "token based" rather than "command line based", which
means that the JAM interpreter reads a stream of input tokens and
processes each token completely before moving to the next. Newell and
Warnock implemented JAM on various Xerox workstations; by 1981 JAM
was available at Stanford on the Xerox Alto computers, where I first […]
Jerry Mendelson's Laser Lovers post reads: "A FORTH like graphics/printing language was developed by, among others, John Warnock before coming to Xerox/PARC." Mendelson citing Forth as the point of comparison made me believe they knew of and used it. But admittedly "based on" is a bit much.
Funnily enough, the first sentence of the Wikipedia page about Interpress literally says: "Interpress is a page description language developed at Xerox PARC, based on the Forth programming language and an earlier graphics language called JaM."
The reference is to: Harrington, Steven; Buckley, Robert R. (1988). Interpress, the Source Book. Brady Book. p. 18. ISBN 0-13-475591-X.
Unfortunately, I can't find a copy of it online (wouldn't it be deliciously ironic to find a PDF rendering of the Interpress Source Book? ;), so I can't check if the citation literally says Interpress was based on Forth, but I've asked Brian Reid if he had a copy or knew a better citation. I'll update the Wikipedia page if I get any more information.
I really love Brian's first-hand account of the origins of PostScript that he posted to laser-lovers!
If the Helvetica font can have a movie all about it, I think the story of PostScript deserves a whole miniseries! "showpage: the PostScript Documentary"
Brian Reid is also known for Scribe, the Usenet Cookbook, and is well respected enough in computer security circles that Kevin Mitnick impersonated him on the phone to gain people's trust! ;)
Brian's brother Glenn Reid was also very active in the PostScript world, he worked for Adobe (Illustrator), Apple (iMovie) and Fractal Design (Painter, Dabbler, Poser), and NeXT (Interpersonal Computing).
Around 1990, Glenn Reid independently designed and developed a delightfully original "Font Appreciation" app for NeXT called TouchType, which only recently, decades later, somehow found its way into Illustrator. Adobe even CALLED it the "Touch Type Tool", but didn't give him any credit or royalties. The only difference in Adobe's version of TouchType is that there's a space between "Touch" and "Type" (which TouchType made really easy to do), and that it came decades later!
Illustrator tutorial: Using the Touch Type tool | lynda.com: https://www.youtube.com/watch?v=WUkE3XLw_EA
SUMMARY OF BaNG MEETING #4, July 18, 1990: https://ftp.nice.ch/peanuts/GeneralData/Usenet/news/1990/_CS...
TOUCHTYPE Glenn Reid, Independent NeXT Developer
The next talk was given by Glenn Reid, who previously worked at both NeXT and Adobe. He demonstrated the use of his TouchType application, which should prove to be an enormous boon to people with serious typesetting needs.
TouchType is unlike any other text-manipulation program to date. It takes the traditional "draw program" metaphor used by programs like TopDraw and Adobe Illustrator and extends it to encompass selective editing of individual characters of a text object. To TouchType, text objects are not grouped as sequences of characters, but as individually movable letters. For instance, the "a" in "BaNG" can be moved independently of the rest of the word, yet TouchType still remembers that the "a" is associated with the other three letters.
Perhaps the best feature of this program is the ability to do very accurate and precise kerning (the ability to place characters closer together to create a more natural effect). TouchType supports intelligent automatic kerning and very intuitive, manual kerning done with a horizontal slider or by direct character manipulation. It also incorporates useful features such as sliders to change font sizes, character leading, and character widths, and an option which returns characters to a single base line.
TouchType, only six weeks in development, should be available in early August, with a tentative price of $249. BaNG members were given the opportunity to purchase the software for $150.
1) EPS (which the article just mentioned) is part of Adobe's Document Structuring Convention, which solved the problem of skipping and reordering pages.
>Every DSC-compliant document is indicated by having the comment %!PS-Adobe 3.0 as the first line. This comment is a flag to indicate that the document is compliant. You should never use this comment unless your document really is DSC compliant. There are many other parts to proper DSC. A document which follows the DSC can be manipulated in many ways. In particular, postprocessors can shuffle the pages, print two or more pages on a side, and so on. The printer drivers from some notable companies do not follow the DSC, and their PostScript documents are, therefore, impossible to work with once they've been generated.
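To make the "postprocessors can shuffle the pages" point concrete, here's a toy Python sketch (the function name and the sample document are my own inventions, not from any real tool) that finds page boundaries purely by scanning for `%%Page:` comments, with no PostScript interpreter involved:

```python
# Sketch: why DSC comments matter. A postprocessor can split a document
# into pages just by scanning for %%Page: comments. The sample document
# below is made up for illustration.

SAMPLE = b"""%!PS-Adobe-3.0
%%Pages: 2
%%EndComments
%%Page: 1 1
(Hello) show showpage
%%Page: 2 2
(World) show showpage
%%EOF
"""

def split_pages(ps):
    """Return (prologue, list of page chunks), split on %%Page: comments."""
    prologue, pages = [], []
    for line in ps.splitlines(keepends=True):
        if line.startswith(b"%%Page:"):
            pages.append([])
        (pages[-1] if pages else prologue).append(line)
    return b"".join(prologue), [b"".join(p) for p in pages]

prologue, pages = split_pages(SAMPLE)
print(len(pages))  # 2

# Reversing the page order is now trivial -- but ONLY if the document
# really is DSC-compliant, i.e. the pages don't depend on each other.
# (A real tool would also renumber the %%Page: comments.)
reversed_doc = prologue + b"".join(reversed(pages))
print(reversed_doc.count(b"%%Page:"))  # still 2
```

If a driver emits non-compliant PostScript, this kind of purely textual manipulation silently produces garbage, which is exactly the complaint in the quoted passage.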
2) Inefficiently skipping pages (or being slow) is NOT the main problem with PostScript's Turing completeness!
>Security Issues: In early 2017, Microsoft took steps to prevent exploits based on Word's import "filter" for EPS. Based on the assessment that "EPS files are a legacy format that has largely fallen out of favor in today’s ecosystem", Microsoft disabled the import filter by default in its April 2017 security update for Office. According to PPTools post I can no longer insert EPS graphics into PowerPoint, this update applied to Office 2010, 2013, and 2016/365. Import of EPS files into Office 2007 has been disabled since 2015. The stand-alone Office for Mac is not affected.
>A Jan 2017 post from The Register, We don't want to alarm you, but PostScript makes your printer an attack vector, publicizes the vulnerability of networked PostScript printers. The potential for damaging exploits based on communication between computers and PostScript printers has been understood for years, but was not a serious threat before PostScript printers were networked and accessible over the Internet. See also PostScript page from Hacking Printers.
>By now you can probably see the fundamental philosophical difference
between PostScript and Interpress. Interpress takes the stance that the
language system must guarantee certain useful properties, while
PostScript takes the stance that the language system must provide the
user with the means to achieve those properties if he wants them. With
very few exceptions, both languages provide the same facilities, but in
Interpress the protection mechanisms are mandatory and in PostScript
they are optional. Debates over the relative merits of mandatory and
optional protection systems have raged for years not only in the
programming language community but also among owners of motorcycle
helmets. While the Interpress language mandates a particular
organization, the PostScript language provides the tools (structuring
conventions and SAVE/RESTORE) to duplicate that organization exactly,
with all of the attendant benefits. However, the PostScript user need
not employ those tools.
I don't think of Bézier curves as implicit in any way. There's an explicit equation for the curve as a function of "time" (0 at the beginning of the curve, 1 at the end, increasing monotonically in between).
The gist of a Bezier curve is "Take weighted averages, repeatedly".
For beziers you might explicitly evaluate them at time t if you're doing animation, or implicitly evaluate them at position x,y if you're rendering them.
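The "take weighted averages, repeatedly" recipe is De Casteljau's algorithm. A minimal sketch in Python (the control points are made up; any list of points works):

```python
# De Casteljau's algorithm: evaluate a Bezier curve by taking
# weighted averages of the control points, repeatedly.

def lerp(p, q, t):
    """Weighted average of two points."""
    return tuple((1 - t) * a + t * b for a, b in zip(p, q))

def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1]."""
    pts = list(points)
    while len(pts) > 1:
        pts = [lerp(pts[i], pts[i + 1], t) for i in range(len(pts) - 1)]
    return pts[0]

# A cubic Bezier: starts at (0, 0), ends at (3, 0), arbitrary control points.
ctrl = [(0, 0), (1, 2), (2, 2), (3, 0)]
print(de_casteljau(ctrl, 0.0))  # (0.0, 0.0) -- start point
print(de_casteljau(ctrl, 1.0))  # (3.0, 0.0) -- end point
print(de_casteljau(ctrl, 0.5))  # (1.5, 1.5) -- midpoint of the curve
```

Sampling t over [0, 1] this way is exactly the "explicit evaluation" case: no equation-solving anywhere.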
It would make the already excellent article even better if you simply linked to the relevant Wikipedia page.
Similarly, for 3D, you really want parametric surfaces, which NURBS are.
For the readers:
The key difference: to plot an implicit curve, you need to solve an equation, which is a whole other can of worms. Solving equations is hard. For equations of degree five and higher, there are no general formulas for the solutions!
To plot a parametric curve, you just need to plug numbers into some formulas. That's much easier!
The problem discussed in the article is that even if you have an explicit parametrization, answering questions like "Where does this curve/surface intersect this ray?" involves equation-solving. And that's hard. As would be something like "represent this curve/surface with segments/triangles of uniform size".
TL;DR: implicit is something for which you don't have a formula. Bezier and NURBS have easy formulas; they are parametric.
Another thing possibly of value: where Bezier curves come from - automotive design. The machines that were tracing these curves out needed to solve the forward problem (where should the tool head be at time t?), and the designers needed an easy way to change the shape of the curve. The Bezier splines were (and still are) a great solution to that problem.
For that reason, it's easy to convert Bézier splines into G-code for either a plotter or, these days, a 3D printer.
Also, sampling the parametric form does give you a polygon (or a polygonal mesh), which is easier to work with. But that's another can of worms.
That's not strictly true - you could choose a numerical algorithm like Newton-Raphson.
3D B-rep modelers will typically solve for x, y, z on a curve that way, because many of the curves won't have an implicit form of any reasonably low degree. In fact, some of them won't have an implicit form at all.
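As a sketch of the Newton-Raphson idea (not what any particular B-rep kernel actually does; the control values, starting guess, and tolerance here are arbitrary), here's solving for where one coordinate of a cubic Bézier hits a target value:

```python
# Newton-Raphson on a parametric curve: find t where the curve's
# y-coordinate crosses a target value. All constants are illustrative.

def bezier3(a, b, c, d, t):
    """Cubic Bezier in one coordinate (Bernstein form)."""
    s = 1 - t
    return s**3 * a + 3 * s**2 * t * b + 3 * s * t**2 * c + t**3 * d

def bezier3_deriv(a, b, c, d, t):
    """Derivative of the cubic with respect to t."""
    s = 1 - t
    return 3 * s**2 * (b - a) + 6 * s * t * (c - b) + 3 * t**2 * (d - c)

def newton_crossing(a, b, c, d, target, t0=0.5, tol=1e-12, max_iter=50):
    """Solve bezier3(t) == target for t by Newton-Raphson iteration."""
    t = t0
    for _ in range(max_iter):
        f = bezier3(a, b, c, d, t) - target
        if abs(f) < tol:
            return t
        t -= f / bezier3_deriv(a, b, c, d, t)
    return t

# A monotonically rising coordinate: find where it equals 1.0.
t = newton_crossing(0, 0.5, 2.5, 3, 1.0)
print(round(bezier3(0, 0.5, 2.5, 3, t), 6))  # 1.0
```

This converges fast when the curve is well-behaved near the root; real kernels add safeguards (bracketing, fallback bisection) for the cases where plain Newton diverges.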
But I thought in this case, perhaps this was a big deal. I thought one of the reasons why 2D graphics and 3D graphics are so different is because of ... is it topology? ... For any arbitrarily complex curve, I think I can give you a 1D parameterization for it. At least to some desired accuracy.
What condition is required of a surface so that you can do the same, with a 2D parameterization?
If you haven't seen it already, read this paper by Laurent Boué: "Real numbers, data science and chaos: How to fit any dataset with a single parameter"
Even for drawing 2D graphics, the 3D technology works so much better.
(As an aside, I think this state of affairs is unfortunate, given how much money there is in mobile apps and browsers and the like. The vast majority of the research goes to 3D, and 3D is flashier and prettier work in a sense, but the amount of time we spend reading text on a screen generally dwarfs the amount of time people spend consuming 3D CG entertainment...)
Do you mean vector vs raster?
Finding fun mechanics that add to CPU load is awfully easy. ;)
One of my favorite PS1 games whose designers rightfully said "fuck it, we're just going to draw perspective-incorrect textures" as well as "fuck it, we're just going to simulate newtonian-incorrect physics" in the name of excellent gameplay, is Twisted Metal 2!
Check out the deliciously distorted faces of the buildings along the Boulevards of Paris:
Peter Macon, the guy who voiced Axel and Minion in Twisted Metal, now plays Lieutenant Commander Bortus on The Orville!
‘Crash Drive 2’ is the closest online game that I've found so far, but its imbalanced upgrade mechanics botch the challenge after a few hours.
AFAIK, GDI was part of Windows since the very first version released in 1985.
The 3D equivalent of vector graphics would be Boundary Representation , which is multiple orders of magnitude harder than 2D vector graphics.
This premise in the article was pretty confusing as I do not think this is universal.
When I think of vector graphics I think of fonts (2d) and star wars cgi (3d vector death stars).
When talking about rasterizers, the modern pipeline of rendering 3d meshes to a 2d back buffer comes to mind.
It took me some time to understand the purpose of this article because I first had to understand that 2d meant vector and 3d was to mean raster in this article.
Typically when we say "raster graphics", we mean graphics already computed as a set of points, like a bitmap PNG file. But here 2D comes in as a set of curves, and 3D comes in as a set of triangle edges. Both are mathematical descriptions that have to be "rasterized", i.e. transformed from the edge representation to the raster one.
That's kind of the article's point. It explains how it's an apples to oranges comparison. If that's obvious to you, you don't need to read it.
I'm answering in good faith and with good will toward you :)
Most of the article doesn't seem relevant to the headline, and as such I found it difficult to understand where the article was leading and why I should be interested to read on.
If the headline was something like "Parallel histories of 2D and 3D vector graphics" then I would have understood what I was committing to read and been more satisfied with the overall experience.
Obviously, the article has found an audience so perhaps I am wrong and you can safely ignore this thought ;-)
An interesting way to look at this technology from a startup perspective. In the new industry of digital printing, they found a business model based on the loophole that typeface designs are uncopyrightable, while font programs could be monetized.
I think it’s just legacy. For many decades before modern GPUs, 2D graphics was done in certain ways. People accumulated experience, wrote books, taught stuff to students in university.
Then modern GPUs happened, but initially only gamers and 3D graphics professionals had them, so 2D programmers couldn’t care less; their users just didn’t have hardware for GPU-based rendering. I think it was only around 2008–10 that non-game developers could finally use GPUs without sacrificing their user base, and even at that time it meant “DX9 GPU”, so no geometry or compute shaders.
I think people will adjust after a while. Professors teaching 2D graphics will stop talking about moveto/lineto and start talking about submitting triangles and writing shaders.
The 2D graphics system of the future will of course submit triangles and make use of shaders, but it is hard to imagine that we will be directly working with triangles. And we have so far taken the subtly broken scenery of raw 3D triangles for granted, but not tolerated jaggy 2D strokes. To resolve this dichotomy, either 3D should go continuous (e.g. implicit surfaces) or 2D should go discrete as you suggest; the latter seems more unlikely.
“The future is already here, it's just not evenly distributed” © William Gibson.
It already works this way on modern Windows: WPF, UWP, and Direct2D already use GPUs.
> it is hard to imagine that we will be directly working with triangles
Most people doing 2D graphics on CPU don’t moveto/lineto either; they use higher-level libraries for that. To be good at it, you have to understand, to some extent, a couple of underlying abstraction layers. That’s why universities are teaching these basics. The basics for GPU-rendered 2D graphics are totally different, and until very recently only 3D graphics professionals cared about them.
> either 3D should go continuous (e.g. implicit surfaces)
You don’t necessarily need implicit surfaces. If you have a GPU ≤ 10 years old, you can convert 2D vector stuff to triangles on the fly.
Here’s a simple example that renders a polyline: https://github.com/Const-me/vis_avs_dx/blob/master/avs_dx/Dx... It cuts a few corners e.g. doesn’t handle self-intersecting polylines particularly well if you have transparency, but it’s all fixable, that was an unpaid hobby project.
Often you can do the same with Bezier/Nurbs, see tessellation stages in D3D11 docs.
Oh, you are right. Even DirectWrite makes use of GPUs (and not just texture atlas) when you use a correct API.
> [...] That’s why universities are teaching these basics. [...]
I agree, if that's what you meant. It is unfortunate that most university courses are not concerned with fundamentals, or are only concerned with outdated fundamentals.
I don’t think they’re that bad. For things that are relatively stable, like compilers, file systems, OS kernels, or relational databases, universities are doing an OK job.
It’s just that GPUs have been evolving too fast for the last decade. They drive the evolution of everything that runs on them. GPUs have become ridiculously powerful (in my desktop PC, the GPU is 25 times faster than the CPU at floating-point math), so more and more stuff runs on them. Initially they only ran 3D game engines, then 2D graphics, now also a lot of high-performance computing. When stuff gets ported from CPU to GPU it changes a lot, due to the very different architecture.
"Curves to straight line segments" and "polygon to triangles" was done in CPU, everything else in GPU. It was the year 2005 or so.
Edit: I really enjoyed the article and its information, but I think I missed the part where it answered the question in the title.
Doesn't really make a difference to the point in the article. Both are "vector" devices. But I'm curious about the history.
Computer-controlled plotters apparently date back to 1953, but the online source for that is iffy. The Atlas Missile Guidance Computer had some, which would be around 1957. Vannevar Bush's differential analyzer had plotters, but that was mechanical, not electronic.
Sketchpad, on the MIT TX-2, was the beginning of what we'd recognize today as interactive graphics.
There were many early military systems. The USAF really wanted graphic displays. Preferably big ones. SAGE, the Iconorama, and other triumphs of money over the problem followed.
By the late 1960s, most of the typesetting companies had some kind of phototypesetting machine. Fonts were typically on glass disks and printed onto photographic paper by firing a strobe lamp at the right time. The next stage was using a stroke-writing CRT to draw the characters. Usually the CRTs for these were about one line high but a page wide. Harris-Intertype had that approach working before 1970, long before Metafont.
The font problem in that era was "given this outline, fill it in with minimal stroke movement". Strokes had a width, so you had the usual problem of machine tools - all inside corners are rounded. If the beam is too small, it takes too many strokes to do the fill, and if it's too big, there's too much corner rounding. I knew of a program called Procrustes which did that job for the Harris-Intertype Phototronic around 1970.
Do you count differential analyzers as computers?
If so, then yes.
IIRC, some of the input devices to such machines were x/y "pointer" tables - imagine a flatbed plotter, where you moved the part where the pen would go, in order to input cartesian coordinates (or a continuous value I suppose, composed of x/y values?)...
Prior to oscilloscopes (but after differential analysers) - the Williams Tube was used as a memory system. It could be considered a form of bitmap display, except you'd have to only use it in a dark room (and no camera flashes, dammit!):
Before plotters or oscilloscopes, there were mechanical harmonic analyzers which could perform Fourier transforms and the inverse operation and plot the result on a piece of paper. It's a vector device too, albeit a mechanical one, not digital or even electronic.
I thought it used constructive solid geometry?
Note that OpenSCAD can also export a CSG data structure of a model. The OpenSCAD module in FreeCAD uses this to generate B-rep based solid models using their OpenCascade kernel.
This raises the point of difference between OpenSCAD and Blender. In OpenSCAD the mesh is mostly a secondary thing, an artifact of how the internals are implemented; the "primary" model the user modifies is the code itself. Whereas in Blender, the mesh is the primary model being modified by the user.
Although the distinction is a bit blurred, as Blender has some support for NURBS these days. And one can have non-destructive operators ("modifiers") such as subdivide/array/mirror etc.
Of course you can also define surfaces using Bézier curves in Blender.
There are algorithms for rendering smooth surfaces, but for a series of reasons they tend not to be used for real-time rendering.
> The first graphics devices to interact with computers were “plotters”,
The first graphics devices used as such on digital computers were the visible debugging CRTs driven from the Williams tube control signals, and of course "ASCII" art (before ASCII) on printers. George Dyson's book on the Johnniac has a lot of images of one-dimensional cellular automata from the dawn of the von Neumann machine. The TX-0 had a point-plotting CRT and also, I believe, the first pen plotter.
> The only major problem with PostScript was its Turing-completeness — viewing page 86 of a document means first running the script for pages 1-85
As Don pointed out, that isn't the major problem with it being Turing-complete. Aside from the frequent security holes that have occurred, I recall a four-line PostScript .signature in the 1990s that computed an IFS fractal for 60 seconds and then printed it; it would plot more or fewer points according to whether your printer's PostScript was faster or slower, so it might not print out the same way twice even on the same printer. Writing a PostScript document that renders more or fewer pages is just as easy. And the common kinds of manipulations you'd like to do with printable documents are unnecessarily difficult with PostScript because of the Turing-completeness — if you implement them by assuming documents that claim to respect DSC will do so, your tools will break at random on real-world documents whose DSC compliance is imperfect. Consider the problem of concatenating two PostScript documents (pscat), each of which might have redefined "moveto" in its prologue, or defined "M" as a shortcut version of "moveto" with similar but subtly different meanings; or, worse, interspersing their pages, or displaying reduced images of their pages (psnup, psbook, mpage) on the same page. In theory you could hack together some kind of sandbox that makes this stuff work reliably, but in practice the solution is to distill the documents to PDF using GhostScript.
(For an overview of DSC, see the excellent Brian Reid paper Jasper and Don posted links to: https://tech-insider.org/unix/research/1985/0301.html)
Even my favorite PostScript test document (from Network PostScript Printer Installation Illustrated Through Interpretive Dance) prints a rectangle that is quite rarely the same size on different printers; it's even different sizes on the same printer using different paper sizes:
clippath stroke showpage
This is not really true. The original motivation for PostScript Distiller (before PDF stopped being a subset of PostScript) was to make a way to get documents onto computers that were too wimpy to run a full PostScript interpreter, like IBM PCs and Macintoshes. The problem was not, as I understand it, primarily that it was slow, and certainly not that it was Turing-complete (those machines ran lots of Turing-complete languages, after all) but that it required a lot of code and a lot of memory.
> It threw out the programming language — but the graphics technology stayed the same.
This was only true at the very beginning. PDF (and SVG) has evolved quite a bit over the years. Notably, PDF and SVG support alpha-blending and gradients, while PostScript didn't, last I checked.
> Bezier curves do exist in 3D, where they are known as B-splines, and they are computable, but they have the drawback that they limit the ways you can connect your mesh together.
This is almost entirely wrong. B-splines as such are one-dimensional, or two-dimensional if you consider the abscissa a dimension, though you can of course construct parametric curves of any dimension each of whose coordinates is a B-spline in that dimension. They are not, in general, computable (and neither are NURBS surfaces, which I think are what is meant), but all of Bézier curves, B-splines, and NURBS surfaces are easy to compute particular points on, if their parameters are computable. B-spline surfaces in 3-D don't, as far as I know, have anything particular to do with Bézier surfaces, except in the sense that Catmull–Clark is a Bézier-like algorithm that converges to a cubic B-spline surface, and of course B-splines (in the sense of basis splines) and the Bernstein polynomials used by Béziers are just alternative bases spanning the same space of polynomials, so you can freely convert a parametric polynomial surface of a given degree between the two representations. Nor do any of Bézier curves, NURBS surfaces, or B-splines impose constraints on surface topology.
> we call approaches like Bezier curves and Hobby splines implicit curves, because they are specified as a mathematical function which generates the curve
This is almost entirely wrong; Bézier curves and Hobby splines are parametric, not implicit, and "specified as a mathematical function which generates the curve" is not what "implicit curve" means. The truth is close to the opposite: an explicit function is a mathematical function which generates the curve, while an implicit function just tells you which side of it you're on.
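To make the parametric/implicit distinction concrete, here is the same circle both ways in Python (the radius is an arbitrary choice): the parametric form *generates* points on the curve, while the implicit form *classifies* points relative to it.

```python
import math

R = 2.0  # arbitrary radius

def parametric_circle(t):
    """Explicit parametrization: plug in t, get a point ON the curve."""
    return (R * math.cos(t), R * math.sin(t))

def implicit_circle(x, y):
    """Implicit form f(x, y) = x^2 + y^2 - R^2.

    f == 0 on the curve; the sign tells you which side you're on.
    """
    return x * x + y * y - R * R

x, y = parametric_circle(1.234)           # some point on the circle
print(abs(implicit_circle(x, y)) < 1e-9)  # True: on the curve, f ~ 0
print(implicit_circle(0.0, 0.0) < 0)      # True: the origin is inside
print(implicit_circle(5.0, 0.0) > 0)      # True: (5, 0) is outside
```

Note the asymmetry: generating points from the implicit form, or testing sidedness with only the parametric form, each requires solving an equation.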
> The 3D equivalent of an implicit curve is an implicit surface, made up of basic geometric primitives like spheres, cylinders and boxes.
This is very much not what an implicit surface is. The author has confused CSG ("constructive solid geometry"), which is making things up of basic geometric primitives like spheres, cylinders, and boxes, with implicit surfaces, which are almost, but not quite, completely unlike CSG.
An implicit surface is the solution set of an equation in three unknowns. If you want a sphere, that's easy: (x - 17)² + (y - 22)² + (z - 84)² = 39, for any values of 17, 22, 84, and 39. But a box? That's gonna be a bit rough: (x² - 1)(y² - 1)(z² - 1) = 0 contains a box, but its sides extend to infinity. It's definitely feasible to hack the equation to clip them, but it's not what implicit surfaces are good at, and I think some of the obvious approaches make life a lot harder for solvers.
A much more typical example of an implicit surface, in my book, is Jim Blinn's metaballs — something like 1/((x - 17)² + (y - 22)² + (z - 84)²) + 1/((x - 17)² + (y - 28)² + (z - 84)²) = 0.2, which gives you a blobby barbell-looking thing. Although you might think that it should give you two spheres, actually no patch of its surface is a patch of the surface of a sphere, because the proximity distorts both spheres into non-sphericity.
Or, consider a paraboloid, the shape of a satellite dish or a Newtonian telescope mirror: x² + y² - z = 0 (though that particular example is easy to solve as an explicit heightfield, so maybe it's not the best example of implicit modeling.) That's gonna be a real pain to model with spheres and cylinders. You're never gonna get there if you're trying to get your telescope mirror to focus a clean image.
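As a sketch of what "evaluating an implicit surface" means in practice (all the centers, radii, and the threshold below are arbitrary constants of my choosing), here's sampling a sphere and a metaball-style field in Python; f = 0 is the surface, and the sign of f separates inside from outside:

```python
# Evaluating implicit surface functions: f(x, y, z) = 0 is the surface,
# and the sign of f tells inside from outside. Constants are arbitrary.

def sphere(x, y, z, cx=17, cy=22, cz=84, r2=39):
    """(x-cx)^2 + (y-cy)^2 + (z-cz)^2 - r2 = 0 is a sphere."""
    return (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 - r2

def blobby(x, y, z, threshold=0.2):
    """Metaball-style field: two inverse-square blobs that merge smoothly.

    Negative inside, positive outside. (Blows up exactly at a blob
    center, so don't query those two points.)
    """
    d1 = (x - 17) ** 2 + (y - 22) ** 2 + (z - 84) ** 2
    d2 = (x - 17) ** 2 + (y - 28) ** 2 + (z - 84) ** 2
    return threshold - (1.0 / d1 + 1.0 / d2)

print(sphere(17, 22, 84) < 0)                       # True: center is inside
print(abs(sphere(17 + 39 ** 0.5, 22, 84)) < 1e-9)   # True: on the surface
print(blobby(17, 25, 84) < 0)  # True: midpoint between the blob centers
                               # is inside the merged barbell
print(blobby(0, 0, 0) > 0)     # True: far away is outside
```

Notice there's no mesh anywhere: an implicit surface is just a function you can query, and rendering it means finding (or marching toward) its zero set.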
> 2D graphics started and maintained forward momentum around these implicit curves, by near necessity in their use in modelling human letterforms and glyphs. The hardware and software to compute these paths in real-time was expensive
This is wrong; although some recent systems like Dan Amelang's Gezira do in fact compute paths from letterforms for every frame, none of METAFONT, GDI, and X11 do this, normally. Instead, all of them rasterize the letterforms early, then use the pre-rasterized letterforms over and over. PostScript is in kind of a weird gray area, but I'm reasonably sure that typical PostScript implementations do that most of the time as well, although you can rotate text and blow it up to huge sizes, which necessarily requires re-rasterization at some point, and of course, ever since Type 2, you have "charpath".
> Surfaces like Catmull-Clark and NURBS allow for arbitrarily connected meshes to empower artists, but this can lead to polynomials greater than the fourth degree, which tend to have no closed-form solution
This is not, as far as I know, wrong, but is misleading; if you are computing quartic parametric surface patches, you are not going to do it by using Ferrari's closed-form solution to quartic equations. You are going to do it by subdividing triangles, just as you do for higher-order equations, because tabulating the (x,y,z) values for a lattice of (u,v) values is pretty simple actually.
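Here's what that tabulation looks like for a toy bicubic Bézier patch in Python (the control grid and lattice resolution are made up): just repeated lerps, with no root-finding or closed-form quartic solutions anywhere.

```python
# Tabulating (x, y, z) over a lattice of (u, v) values on a bicubic
# Bezier patch, via the tensor-product De Casteljau construction.

def lerp(p, q, t):
    return tuple((1 - t) * a + t * b for a, b in zip(p, q))

def curve_point(pts, t):
    """De Casteljau: repeated weighted averages along one parameter."""
    pts = list(pts)
    while len(pts) > 1:
        pts = [lerp(pts[i], pts[i + 1], t) for i in range(len(pts) - 1)]
    return pts[0]

def patch_point(grid, u, v):
    """Evaluate each row of the control grid at u, then that column at v."""
    return curve_point([curve_point(row, u) for row in grid], v)

# A 4x4 control grid: a flat sheet with a bump in the middle.
grid = [[(u, v, 1.0 if 1 <= u <= 2 and 1 <= v <= 2 else 0.0)
         for u in range(4)] for v in range(4)]

n = 8  # lattice resolution
lattice = [[patch_point(grid, i / n, j / n) for i in range(n + 1)]
           for j in range(n + 1)]
print(lattice[0][0], lattice[n][n])  # corners: (0.0, 0.0, 0.0) (3.0, 3.0, 0.0)
```

Triangulating the resulting (n+1)×(n+1) lattice gives the mesh you'd actually rasterize, which is why nobody reaches for Ferrari's formula.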
> The difficulty, of course, is that only a legitimate genius like Quílez can create Surfer Boy. There’s no existing tooling for signed-distance field geometry, it’s all code
This is not even wrong; it's incoherent. ImplicitCAD is "all code", sure, but does that make it "no existing tooling"? And, not to bag on IQ here, who is legitimately super awesome, but SDF raymarching is not actually that hard to implement. You don't have to be a genius. You probably shouldn't take the word of someone who confuses CSG and implicit modeling about it, either.
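To back up the "not actually that hard" claim, here is a bare-bones sphere tracer in Python (the scene, camera ray, and step limits are all arbitrary choices of mine, not anyone's production renderer):

```python
import math

# A minimal SDF raymarcher ("sphere tracing"): march along the ray,
# stepping by the signed distance each time. Genuinely not much code.

def sdf(p):
    """Signed distance from point p to the scene: a unit sphere at the origin."""
    x, y, z = p
    return math.sqrt(x * x + y * y + z * z) - 1.0

def raymarch(origin, direction, max_steps=128, eps=1e-6, max_dist=100.0):
    """Return the hit distance along the ray, or None on a miss."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t  # hit: we're within eps of the surface
        t += d        # safe step: nothing is closer than d
        if t > max_dist:
            break
    return None

hit = raymarch((0, 0, -3), (0, 0, 1))   # straight at the sphere
print(round(hit, 4))                     # 2.0: surface at z = -1
print(raymarch((0, 5, -3), (0, 0, 1)))  # None: ray passes above it
```

Doing this per-pixel in a fragment shader, with a more interesting `sdf`, is the whole Shadertoy recipe; the artistry is in composing the distance functions, not in the marcher.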
Also, it seems a bit amiss to bring up IQ and Shadertoy in the same paragraph, but not mention that IQ wrote Shadertoy! (Though of course it's a team effort now.)
Anyway, I thought it was a really interesting article, and it introduced me to a bunch of research I hadn't seen. So it was worth reading. Just be careful about the errors!
The story I wanted to tell was to cover the large, broad developments in the industry that caused our solidification of the respective image models in both 2D and 3D. Perhaps I omitted a bit too much, and I'm certainly framing it with modern eyes, but I do not believe any of this to be revisionist.