It's fascinating to me how some technologies that seemed at one time like also-rans suddenly became hugely relevant when mobile became A Thing. ARM, for instance, started out in the mid-'80s aimed at the desktop market (the "Acorn RISC Machine": see https://en.wikipedia.org/wiki/Acorn_Computers#New_RISC_archi...), got clobbered when the market coalesced around Wintel, and, simply to survive, retreated to one of the few niches where its chips had a clear advantage over Intel's: applications that required low power consumption.
That was a tiny market for a long time, but then mobile became A Thing and Intel, who were focused as they always had been on performance over efficiency, missed the shift completely. So ARM, which had been diligently focusing on efficiency for years, went from being a footnote to history to being one of the key architectures of modern computing.
Also, modern 3D APIs (Metal, DX12, Vulkan) are very un-OpenGL-like. They don't hide much complexity; they embrace and expose it.
Other tiled renderers (both PowerVR at the time and all modern mobile GPUs) would write the color tile back to an in-memory color buffer and scan the image out to the screen from that. They also had fallback paths for anything a game did that couldn't be expressed in a purely tiled way (like many post-processing effects).
They would flush both the tile color and tile depth out to in-memory color and depth buffers and then pull those back in to restart the tiles later. Nvidia's Maxwell and later desktop GPUs take this to the extreme and flush those tiles in and out many times per frame.
Talisman doesn't (can't?) do this. It tries to render all the tiles it needs for scanning out the next 32 lines into a small on-chip buffer and scan those out while rendering the next set of tiles for the following 32 lines.
This saves on memory bandwidth, but the whole approach falls apart if you try to render too many polygons. Instead of frames being late, parts of the screen disappear.
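To make that failure mode concrete, here's a toy simulation (my own sketch, nothing from the actual Talisman design; the tile counts, the per-strip cycle budget, and the cost function are all invented). The write-back path always completes because nothing is racing it, while the just-in-time path skips whatever tiles miss the strip's budget:

    /* Toy contrast of the two tiling strategies (illustrative only:
     * tile counts, costs and budgets are made up, not Talisman specs). */
    #include <stdio.h>

    #define TILES_X 10
    #define TILES_Y 8   /* eight 32-line strips */

    /* Pretend cost of rasterizing one tile, scaled by scene complexity. */
    static int tile_cost(int tx, int ty, int complexity) {
        return complexity * (1 + (tx + ty) % 3);
    }

    /* Conventional write-back tiling: every finished tile is flushed to an
     * in-memory framebuffer and scan-out reads that buffer later, so a
     * heavy frame is late but complete. */
    static void render_writeback(int drawn[TILES_Y][TILES_X]) {
        for (int ty = 0; ty < TILES_Y; ty++)
            for (int tx = 0; tx < TILES_X; tx++)
                drawn[ty][tx] = 1;   /* cost doesn't matter; nothing races us */
    }

    /* Talisman-style just-in-time tiling: each strip must be ready before
     * the beam reaches it, so tiles that blow the budget never appear. */
    static void render_just_in_time(int drawn[TILES_Y][TILES_X],
                                    int complexity, int budget_per_strip) {
        for (int ty = 0; ty < TILES_Y; ty++) {
            int spent = 0;
            for (int tx = 0; tx < TILES_X; tx++) {
                spent += tile_cost(tx, ty, complexity);
                drawn[ty][tx] = (spent <= budget_per_strip);
            }
        }
    }

    static void show(const char *label, int drawn[TILES_Y][TILES_X]) {
        printf("%s\n", label);
        for (int ty = 0; ty < TILES_Y; ty++) {
            for (int tx = 0; tx < TILES_X; tx++)
                putchar(drawn[ty][tx] ? '#' : '.');
            putchar('\n');
        }
    }

    int main(void) {
        int a[TILES_Y][TILES_X], b[TILES_Y][TILES_X];
        render_writeback(a);
        render_just_in_time(b, /*complexity=*/5, /*budget_per_strip=*/60);
        show("write-back (late but complete):", a);
        show("just-in-time (missed tiles vanish):", b);
        return 0;
    }

The real hardware is obviously nothing like a pair of nested loops, but the shape of the failure is the same: the budget is per strip of scan lines, so overruns show up as missing chunks of the picture rather than as a late frame.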
And before miniGL came to be, we already had Glide on the PC.
Had it not been for miniGL and Carmack's advocacy for OpenGL, it wouldn't have been much of a thing outside SGI.
SGI started with their proprietary IRIS GL, and that became OpenGL at the behest of other Unix vendors at the time.
Instead it became a rite of passage for every OpenGL beginner to reimplement Inventor in their own way, either from scratch or by playing Tetris with different kinds of libraries.
Meanwhile, other graphical SDKs always provided such APIs.
By the way, the best 3D API back then was RenderMan, the genesis of modern shading languages.
Yes, there was Open Inventor afterwards, but its story is a bit convoluted.
You would be right in that few games dared use it for gameplay. Gran Turismo (Japan import only, maybe?) had a hi-res mode where it used this interlaced mode, and had to simplify the environment a lot compared to the regular game. Watching that thing run at 60Hz in high resolution was breathtaking.
Many of my games include some form of vsync option, I think. (E.g., Factorio calls it "Wait for VSync", 7 Days to Die "VSync", Zandronum (a modern Doom port) "Vertical Sync").
My understanding was that it's a trade-off between smooth graphics locked to the display's refresh rate and a faster framerate with potential tearing (an on-screen artifact). With VSync off, you'll "see" something as soon as the currently rendering frame writes a pixel containing whatever is relevant, but you get tearing where the new frame overwrites the old one at the point the display is currently "drawing"/displaying. With VSync on, you have to wait until the next refresh to show the completed frame (and for the scan-out to reach the relevant pixel), which avoids tearing but introduces a slight delay.
(Though it does seem like it defaults to "off". It's my understanding that pro gamers prefer the lower latency over the visual artifacting.)
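For what it's worth, that menu option usually boils down to a single swap-interval call on the window's swap chain. A minimal sketch assuming GLFW and OpenGL (my choice of library; the window size and the toggle variable are just for illustration):

    /* Minimal sketch (assuming GLFW + OpenGL) of where a "VSync" toggle
     * typically ends up: the swap interval of the default framebuffer. */
    #include <GLFW/glfw3.h>

    int main(void) {
        if (!glfwInit())
            return 1;

        GLFWwindow *win = glfwCreateWindow(640, 480, "vsync demo", NULL, NULL);
        if (!win) {
            glfwTerminate();
            return 1;
        }
        glfwMakeContextCurrent(win);

        int vsync_on = 1;                   /* the in-game menu option */
        glfwSwapInterval(vsync_on ? 1 : 0);
        /* 1: glfwSwapBuffers blocks until the next vertical refresh — frame
         *    rate capped at the display's refresh, no tearing, a bit more
         *    latency.
         * 0: the swap happens immediately — lower latency, but the new frame
         *    can replace the old one mid-scan-out, which is the tear. */

        while (!glfwWindowShouldClose(win)) {
            glClear(GL_COLOR_BUFFER_BIT);   /* draw the frame here */
            glfwSwapBuffers(win);
            glfwPollEvents();
        }

        glfwTerminate();
        return 0;
    }

With interval 1 the wait you describe is exactly that blocking swap; with interval 0 the tear is the horizontal seam where scan-out switches from the old frame to the new one.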
GP was talking about rendering each frame in lockstep with the vertical refresh; there is no option to wait, so you instead get a partially drawn frame (TFA mentions the runway disappearing in flight sims; that's a common artifact because much of the geometric complexity was on the ground, so when you were landing, the complexity would spike, the engine would lag behind, and some things just wouldn't get drawn in time).
Talisman was a pretty goofy idea in retrospect (and seemed goofy to me at the time as well). Completely non-scalable. It was an exercise in optimizing things that were already fast, which is the last thing you want to do in game development, where you're going to be judged on your slowest frames.