Famous Graphics Chips: Microsoft’s Talisman (computer.org)
64 points by ingve 51 days ago | 24 comments



> So, tiling wasn’t a good choice for PCs and workstation, but as mentioned, ideal for power starved mobile devices.

It's fascinating to me how some technologies that seemed at one time like also-rans suddenly became hugely relevant when mobile became A Thing. ARM, for instance, started out in the mid-'80s aimed at the desktop market (the "Acorn RISC Machine": see https://en.wikipedia.org/wiki/Acorn_Computers#New_RISC_archi...), got clobbered when the market coalesced around Wintel, and simply to survive retreated to one of the few niches where their chips had a clear advantage over Wintel's: applications that required low power consumption.

That was a tiny market for a long time, but then mobile became A Thing and Intel, who were focused as they always had been on performance over efficiency, missed the shift completely. So ARM, which had been diligently focusing on efficiency for years, went from being a footnote to history to being one of the key architectures of modern computing.


It is also interesting that Intel (after buying the StrongARM line from DEC) manufactured some of the fastest ARM-based mobile chips in the late '90s and early '00s, but sold the line to Marvell in 2006, right before the mobile explosion.


Another example from that track: they have already failed twice at memory tagging support, while SPARC's version became mainstream on Solaris and ARM's will become a requirement for future Android versions.



Tiling was also the "secret sauce" of Nvidia's Maxwell GPUs.


This reminds me of the Atari 7800's architecture. The screen is composed of horizontal "zones" with variable heights. Each strip renders one or more objects (sprites or tiles), stopping the CPU to do a DMA transfer on each scanline. A lot of this is controlled by display lists in RAM, but some settings (palettes for example) have to be changed on-the-fly via interrupts. And sprites invariably span multiple zones, which is another PITA.
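
For anyone who hasn't dug into the 7800 docs: the layout is roughly a list of lists, one display list per zone. A loose C sketch of the idea (field names and packing are mine, not MARIA's actual byte format):

    /* Rough sketch of the 7800 "zone" scheme; not MARIA's real byte layout. */
    #include <stdint.h>

    struct dl_entry {                  /* one object in a zone's display list */
        uint16_t gfx_addr;             /* pointer to the object's graphics data */
        uint8_t  palette_and_width;    /* palette select + width, packed */
        uint8_t  hpos;                 /* horizontal position on the line */
    };

    struct zone {                      /* one horizontal strip of the screen */
        uint8_t          height;       /* scanlines in this zone (variable) */
        struct dl_entry *objects;      /* DMA'd by the chip on every scanline of the zone */
    };

    /* A frame is just an array of zones walked top to bottom; a sprite that
       crosses a zone boundary has to appear in both zones' lists. */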


It's an interesting design in how it straddles old and new rendering pipelines. It effectively allows the definition of 3D sprites that can be posed, imaged, tweaked and placed, with the first steps done eagerly and the last on demand during frame output. As pointed out, the challenge would be determining how complex a scene can be produced per frame and staying under that limit, as there doesn't seem to be a way of varying frames/sec.
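
To make that split concrete, here is a loose C sketch of the two-phase idea (names are mine, not Talisman's actual interface): layers get imaged eagerly, and each displayed frame only re-places them with a cheap 2D transform instead of re-rendering the geometry.

    #include <stdint.h>

    typedef struct {
        uint32_t *pixels;              /* layer image rendered earlier (the eager half) */
        int w, h;
        float a, b, c, d, tx, ty;      /* affine placement updated per frame (the cheap half) */
    } layer_t;

    /* On-demand half: inverse-map an output pixel into the layer and sample it. */
    static uint32_t sample_layer(const layer_t *l, float x, float y) {
        float det = l->a * l->d - l->b * l->c;           /* assumes an invertible transform */
        float u = ( l->d * (x - l->tx) - l->b * (y - l->ty)) / det;
        float v = (-l->c * (x - l->tx) + l->a * (y - l->ty)) / det;
        if (u < 0 || v < 0 || u >= l->w || v >= l->h)
            return 0;                                    /* outside the layer: transparent */
        return l->pixels[(int)v * l->w + (int)u];
    }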


The whole thing sounds insanely complicated, and it's unclear that the complexity could ever be hidden behind a standard OpenGL-like 3D API.


Modern GPUs do way more magic under the hood. BTW, Talisman used a tiled rendering strategy that's now implemented in many modern GPUs: https://en.wikipedia.org/wiki/Tiled_rendering

Also, modern 3D APIs (Metal, DX12, Vulkan) are very OpenGL-unlike. They don't hide much complexity; they embrace and expose it.


What was insane about Talisman is that it tried to render the tiles "just in time".

Other tiled renderers (both PowerVR at the time and all modern mobile GPUs) would write the color tile back to an in-memory color buffer and scan the image out to the screen from that. They also had fallback paths if a game does something that can't be expressed in a pure tiled way (like many post-processing effects).

They would flush both the tile color and tile depth out to in-memory color and depth buffers and then pull those back in to restart tiles later. Nvidia's Maxwell and later desktop GPUs take this to the extreme and flush those tiles in and out many times per frame.

Talisman doesn't (can't?) do this. It tries to render all the tiles it needs for scanning out the next 32 lines into a small on-chip buffer, and scans those out while rendering the next set of tiles for the following 32 lines.

This saves on memory bandwidth but the whole approach falls apart if you try to render too many polygons. Instead of frames being late, parts of the screen disappear.
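
The difference between the two loops, very roughly (pseudocode in C clothing; the function names are placeholders, not either chip's real pipeline):

    #define SCREEN_HEIGHT 480
    #define NUM_TILES     300

    void render_tile(int t);            /* on-chip color + depth for one tile */
    void flush_tile_to_dram(int t);
    void render_tiles_for_band(int b);  /* tiles covering one 32-line band */
    void scan_out_band(int b);

    /* (a) Conventional tiler: resolve every tile to a framebuffer in DRAM and
       let the display controller read it at its own pace.  A slow frame is
       merely late. */
    void frame_conventional(void) {
        for (int t = 0; t < NUM_TILES; t++) {
            render_tile(t);
            flush_tile_to_dram(t);       /* costs bandwidth, decouples render from scanout */
        }
    }

    /* (b) Talisman-style just-in-time: render only the tiles for the next
       32-line band into a small on-chip buffer while the previous band is
       scanned out.  If rendering can't keep up, those pixels never exist and
       part of the screen drops out instead of the frame being late. */
    void frame_jit(void) {
        for (int band = 0; band < SCREEN_HEIGHT / 32; band++) {
            render_tiles_for_band(band); /* must finish before scanout reaches this band */
            scan_out_band(band);         /* races the beam; there is no full framebuffer */
        }
    }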


Yes, and this kind of issue comes up a lot in computing when trying to find deep optimizations: If you make a processing pipeline that drops information and reduces quality as it goes along, you can always turn that into a speed-up, but some features become hard and others become impossible. So in the end most pipelines cohere into an unexciting but generalized-enough solution that retains some extra data, while the cool hacks and approximations are left as special cases used to get an effect in specific applications (and the history of real-time 3D is, in essence, a history of progressively fewer cool hacks and more generalizations).


Consider that this work started in 1991 and OpenGL wasn’t a thing outside of SGI until 1992.


Consider that we were already doing GPGPU-like programming (in concept at least) on the Amiga with DMA and Agnus in 1987.

https://en.wikipedia.org/wiki/MOS_Technology_Agnus

And before miniGL came to be, we already had Glide on the PC.

https://en.wikipedia.org/wiki/Glide_(API)

Had it not been for miniGL and Carmack's advocacy for OpenGL, it wouldn't have been much of a thing outside SGI.


I still have my Amiga!

SGI started with their proprietary IrisGL and that became OpenGL at the behest of other Unix vendors at the time.


OpenGL would have been better if that role had been taken by Iris Inventor.[0]

Instead it became a rite of passage for every OpenGL beginner to reimplement Inventor in their own way, either from scratch or by playing Tetris with different kinds of libraries.

Meanwhile, other graphical SDKs always provided such APIs.

By the way, the best 3D API back then was RenderMan, the genesis of modern shading languages.

https://www.amazon.com/RenderMan-Companion-Programmers-Reali...

[0] Yes there was Open Inventor afterwards, but its story is a bit convoluted.


It may have made more sense back then, and maybe there was a belief that the complexity was worth it to push a new standard-like API. Innovation is hard, and it's worth respecting all attempts.


The fact that it all falls apart if the application falls behind by even one frame is a showstopper for game development.


PS1 had an interlaced mode where IIRC you had to keep up with 50/60Hz, because there were only two screen pages, and every other vertical retrace would use one of them. You're rendering to buffer A while B shows, but if you're not done with A at the vertical retrace, A will start showing while you are still rendering to it. That's problem 1. Problem 2 is, can you render the next frame to B in the less-than-a-frame time you have left before B shows again? If you discard the current partial frame and wait, the old B will show, effectively looking like a backwards movement glitch.
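
Spelled out as a loop (placeholder functions, not the real PS1 library calls), the hazard looks like this:

    int  render_frame(int page);        /* returns 0 if the frame wasn't finished in time */
    void wait_for_vblank(void);

    void interlaced_loop(void) {
        int showing = 1;                            /* page B is on screen */
        for (;;) {
            int drawing = showing ^ 1;              /* draw to the page not being shown */
            int done = render_frame(drawing);
            wait_for_vblank();
            showing ^= 1;                           /* the hardware flips regardless of 'done' */
            if (!done) {
                /* Problem 1: the page we were still drawing into is now visible.
                   Problem 2: discarding and waiting re-shows the old page,
                   which reads as a backwards glitch. */
            }
        }
    }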

You would be right in that few games dared use it for gameplay. Gran Turismo (Japan import only, maybe?) had a hi-res mode where it used this interlaced mode, and had to simplify the environment a lot compared to the regular game. Watching that thing run at 60Hz in high resolution was breathtaking.


It used to be the norm in game development to sync with the refresh rate of the CRT.


Is it not still the norm? (Though, with an LCD screen of course, not with a CRT.)

Many of my games include some form of vsync option, I think. (E.g., Factorio calls it "Wait for VSync", 7 Days to Die "VSync", Zandronum (a modern Doom port) "Vertical Sync").

My understanding was that it's a trade-off between smooth graphics at the display's framerate, or a faster framerate with potential tearing (an on-screen artifact). With it off, you'll "see" something as soon as the currently rendering frame renders a pixel containing whatever is relevant, at the cost of tearing when the new frame overwrites the old one at the point where the display is currently "drawing"/displaying. With VSync on, you'd need to wait until the next frame renders (and for it to hit a relevant pixel), introducing a slight delay but avoiding the tear.

(Though it does seem like it defaults to "off". It's my understanding that pro gamers prefer the lower latency over the visual artifacting.)
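
The trade-off in loop form (hypothetical present calls, not any particular API; the only difference is where the wait happens):

    void render_into_back_buffer(void);
    void present_wait_for_vblank(void); /* blocks until the retrace: whole frames only, +latency */
    void present_immediately(void);     /* swaps wherever the beam is: tearing, -latency */

    void loop_vsync_on(void)  { for (;;) { render_into_back_buffer(); present_wait_for_vblank(); } }
    void loop_vsync_off(void) { for (;;) { render_into_back_buffer(); present_immediately();     } }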


This is not the norm; wait for vsync means the frame will be displayed during the vblank; if it takes you 6 frames to draw, then you have a frozen image for 6 frames.

GP was talking about rendering each frame in lockstep with the vertical refresh; there is no option to wait, so you instead get a partially drawn frame (TFA mentions the runway disappearing in flight sims; that's a common artifact because much of the geometric complexity was on the ground, so when you were landing the complexity would spike, the engine would lag behind, and some things just wouldn't get drawn in time).


But at least you had a frame buffer to handle those few cases in any given game where you weren't quite fast enough, or where the OS goes out to lunch briefly.

Talisman was a pretty goofy idea in retrospect (and seemed goofy to me at the time as well). Completely non-scalable. It was an exercise in optimizing things that were already fast, which is the last thing you want to do in game development, where you're going to be judged on your slowest frames.


Today's AR and VR applications have to meet the frame rate (typically 90 FPS) every single frame or they will glitch. Talisman's temporal image coherence approach is not unlike timewarp / Asynchronous Spacewarp on the Rift and Motion Smoothing on SteamVR. These approaches halve the frame rate generated by the app and synthesize every second frame to keep up the 90 FPS.
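
The cadence is roughly this (a sketch of the idea with made-up names, not the Rift or SteamVR compositor API):

    typedef struct { float q[4], p[3]; } pose_t;     /* head orientation + position */
    typedef struct { int id; } frame_t;              /* stand-in for a rendered eye buffer */

    pose_t  latest_head_pose(void);
    frame_t app_render(pose_t pose);                 /* the expensive part, ~45 Hz */
    frame_t reproject(frame_t f, pose_t pose);       /* cheap image-space warp */
    void    display(frame_t f);

    void compositor_loop(void) {
        frame_t last = app_render(latest_head_pose());
        for (int vsync = 1;; vsync++) {
            pose_t pose = latest_head_pose();
            if (vsync % 2 == 0)
                last = app_render(pose);             /* real frame every other vsync */
            display(reproject(last, pose));          /* something is shown at every 90 Hz vsync */
        }
    }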


I can't remember if Talisman passed WHQL, but I don't think it could have.



