It's neat to put graphics on your old DEC terminals; it's cool to have as a hack. But re-implementing it in new tech?
I dunno. I'd really like to see an mmap'd YUV or RGB buffer in the terminal being read from a pipe/file/memory area, because I feel like that might be a little more efficient. Or, hell, what if we got a straight up GL context? That could seriously be interesting.
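For what it's worth, the shared-buffer idea is easy to sketch. Everything here (the file path, the tiny width/height header, the fixed geometry) is made up for illustration; no real terminal speaks this today:

```python
# Hypothetical sketch of an mmap'd RGB framebuffer shared with a terminal:
# a producer writes raw pixels into a mapped file, and a terminal mapping
# the same file could blit them directly, with no escape-sequence decoding.
import mmap
import struct

WIDTH, HEIGHT = 64, 48                 # assumed fixed geometry for the sketch
HEADER = struct.pack("<II", WIDTH, HEIGHT)
size = len(HEADER) + WIDTH * HEIGHT * 3

path = "/tmp/term-fb"                  # hypothetical rendezvous path
with open(path, "wb") as f:
    f.truncate(size)                   # reserve space for header + pixels

with open(path, "r+b") as f:
    buf = mmap.mmap(f.fileno(), size)
    buf[:len(HEADER)] = HEADER
    # Fill the frame with solid red, one RGB triple per pixel.
    buf[len(HEADER):] = bytes([255, 0, 0] * (WIDTH * HEIGHT))
    buf.flush()
    buf.close()
```

The obvious trade-off, as noted downthread, is that a shared memory mapping only works when both ends are on the same machine.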
The page focuses on showing the limits of it, but there are lots of small cases where being able to output graphics to the terminal in otherwise predominantly textual applications is useful, yet not important enough to justify a more complicated solution.
> being read from a pipe/file/memory area
And now you've lost network transparency to gain efficiency that wasn't needed in the first place.
The latter had some merit because your whole system was a terminal, but I don't quite get what this gets us in a system where you can have as many graphics contexts as you want anyway. Especially if the rest of the window where these graphics are embedded is ancient, constrained technology.
I'm not a big fan of the Bell school of tech, but I definitely understand Rob Pike's source of pride in testifying that he never wrote a cursor-controlled terminal app...
Because that is the main benefit: Being able to inline graphics in an otherwise mostly textual interface.
There are a great many situations where I want to work on the command line, and I mostly want to work on text, but occasionally would like to output graphics. X forwarding is great and lets me opt to spawn an image viewer instead, but it's awfully limiting. For starters, it means you have text in one place and the image somewhere else, which is fine if they are fully separate, but annoying if the image and text are providing context for each other, and especially if you want multiple images. Being able to augment tool output with simple graphics, to me, is of similar utility to syntax highlighting and other colour use in the terminal.
E.g. consider being able to easily list a directory on a remote server and inline thumbnails without having to start a file manager via X forwarding.
I'd loathe for people to start writing graphics heavy applications as terminal apps with graphics, or for people to start requiring a pointing device for them, but there's a lot of small uses that can make the command line a lot more pleasant.
Back when Geocities was a thing, I often regretted that fact, yes.
But at least the rest of the web page isn't as limited. In the context of an advanced layout engine, images make much more sense. You also have more options for interactivity, e.g. resizing/zooming.
In an Oberon-ish interface where the image would be a more general object, an "image" teletype could probably prosper better than if you just squeeze pixels into a character raster. Which always reminds me of some of the hacks one did with 8-bit computers.
And yeah, I've played with terminal emulators that could do this. In a shell session, "cat"-ing an image is only rarely enough, so I'll have to go to the image viewer anyway.
Did he use his own terminal at that time?
I'm sure that blitting sprites pixel-by-pixel to the screen is much faster without hardware acceleration.
Hardware acceleration only makes sense if you're doing vector/3D graphics or running complex video decoding algorithms.
Wikipedia on Sixels: https://en.wikipedia.org/wiki/Sixel
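The encoding the article describes is pleasantly simple: each output character packs a vertical strip of six pixels, with the bit pattern added to 0x3F ('?'), and bit 0 being the topmost row. Here's a minimal monochrome sketch of that idea; a real emitter would also send raster attributes and color registers, which are omitted here:

```python
# Minimal sketch of Sixel encoding for a 1-bit image: each character
# encodes a vertical strip of six pixels, bit pattern added to 0x3F.

def to_sixel(bitmap):
    """bitmap: list of rows (lists of 0/1), all rows the same width."""
    height = len(bitmap)
    width = len(bitmap[0]) if height else 0
    out = ["\x1bPq"]                      # DCS introducer starting the sixel data
    for band in range(0, height, 6):      # six pixel rows per sixel band
        chars = []
        for x in range(width):
            bits = 0
            for dy in range(6):
                y = band + dy
                if y < height and bitmap[y][x]:
                    bits |= 1 << dy       # bit 0 is the topmost row of the band
            chars.append(chr(0x3F + bits))
        out.append("".join(chars))
        out.append("-")                   # graphics newline: move to next band
    out.append("\x1b\\")                  # string terminator (ST) ends the sequence
    return "".join(out)

# A tiny checkerboard test pattern, 4 pixels wide and 6 tall:
img = [[(x + y) % 2 for x in range(4)] for y in range(6)]
print(repr(to_sixel(img)))
```

Run in a sixel-capable terminal (xterm with `-ti vt340`, mlterm, etc.), the escape sequence renders as actual pixels in the character stream.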
(also, I can't run the new betas, probably because they're not properly signed)
That should be supported by more terminals.