Most commonly a sprite is represented as a 2D array of pixels that you loop over (for X, for Y) and use math or branching to blend onto the screen. But that's a lot of reading and writing, and much of the math ends up doing nothing because many of the pixels are intentionally invisible.
So, you could do some sort of visible/invisible RLE to skip over the invisible pixels. That's better, but it's still a complicated loop and still a lot of reading pixels.
So, many crazy democoders have decided to write "sprite compilers" that read the 2D color array of a sprite and spit out the exact assembly instructions needed to write each visible pixel, one at a time, as a linear instruction sequence with no branching. The sprites are then assembled and linked into the program code as individual functions. I believe they can even exclusively use immediate values encoded inside the instructions rather than reading the colors from a separate memory address. So, rather than read instruction, read data, write data, it becomes read instruction, write data: two straight lines in memory.
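A toy illustration of the idea in C (a real sprite compiler emits x86 instructions; the buffer pitch and the 3x2 sprite here are invented for the example): the generic blitter branches on every pixel, while the "compiled" function is what a sprite compiler would emit for one specific sprite — straight-line stores with the colours baked in as immediates, no loop, no branches, no reads of sprite data.

```c
#include <assert.h>
#include <string.h>

#define PITCH 320  /* hypothetical bytes per scan line */

/* Generic blitter: tests every pixel for transparency (colour 0). */
static void blit_generic(unsigned char *dst, const unsigned char *src,
                         int w, int h)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            if (src[y * w + x] != 0)          /* a branch per pixel */
                dst[y * PITCH + x] = src[y * w + x];
}

/* "Compiled" version of one specific 3x2 sprite: every visible pixel
 * has become a straight-line store with its colour as an immediate. */
static void blit_compiled(unsigned char *dst)
{
    dst[0 * PITCH + 1] = 7;   /* pixel (1,0), colour 7 */
    dst[1 * PITCH + 0] = 7;   /* pixel (0,1), colour 7 */
    dst[1 * PITCH + 2] = 9;   /* pixel (2,1), colour 9 */
}
```

Both produce the same pixels; the compiled one just does nothing at all for the invisible ones.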
So basically this kind of hack could not be used for a game, then, where interaction is needed?
What's harder is clipping against the sides of the screen. With no branching, there's no way to prevent the sprite from writing past the end of a line/screen (wrapping/mem-stomping). So, there does need to be a test per sprite to detect that case and fall back on a more complicated blitter.
The Allegro game framework had a compiled sprite blitter as a feature early on. So, that would be existence proof of them being used in games :) http://alleg.sourceforge.net/stabledocs/en/alleg016.html
In fullscreen modes, you could also just make your screen buffer bigger than the actual screen by the width and height of your largest sprite.
This only cost one extra AND instruction and allowed the sprites to be clipped to any size rectangular playfield while still maintaining almost all of the speed benefits.
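A hedged sketch of that AND trick in C (buffer dimensions and names invented): make the off-screen buffer a power-of-two size with margins wider than any sprite, and a single mask on the write offset keeps every store inside the allocation — pixels that hang off the playfield land in padding that never gets copied to the screen.

```c
#include <assert.h>

/* Hypothetical sizes: a 256x256 off-screen buffer (64 KiB, so on an
 * 8088 a 16-bit offset would wrap for free) holding a smaller visible
 * playfield; the margins absorb sprite overhang. */
#define BUF_W 256
#define BUF_H 256
#define BUF_MASK (BUF_W * BUF_H - 1)   /* 0xFFFF */

static unsigned char buf[BUF_W * BUF_H];

/* Plot with no per-pixel bounds test: one AND keeps every write
 * inside the buffer, so a sprite overhanging the bottom edge lands
 * back in the padding instead of stomping unrelated memory. */
static void plot(int x, int y, unsigned char c)
{
    buf[(unsigned)(y * BUF_W + x) & BUF_MASK] = c;
}
```

Only the visible playfield region of `buf` is blitted to the real screen each frame, so whatever the mask wraps into is never seen.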
I did extend it to use a full-screen foreground mask that implemented this sort of clipping. I was able to make the mask scrollable which allowed the compiled sprites to appear "behind" fences and other complex shapes with per-pixel accuracy.
It could even be used to mask out individual pixel bits that allowed for fake "lighting" changes with a carefully chosen palette.
(maybe one day..)
However, modern games use GPU acceleration instead of plotting pixels with the CPU, and most higher-level languages don't expose the sort of functionality you need to use this trick in the first place.
Some of the stuff would be completely at home as installation art in any top modern art museum. My favorite is https://www.youtube.com/watch?v=XF4SEVbxUdE done in 64kb!
Here are the results, with links to productions (most of them have YouTube videos by now)
I wonder if youngsters who didn't grow up thinking 1 MHz is a perfectly acceptable CPU speed and that 640 KB is a whole lot of RAM will understand what the fuss is about here...
They won't, and thus they don't understand the value of demos in the first place. But don't blame them, even back in the days I knew many people who could not appreciate demos either.
Look at modern forum discussions on Smartphones, it's full of youngsters comparing specs of their respective phones without grasping at all what they mean. Or maybe you are referring to a highly educated subset of youngsters, but that's very few of them.
Two weeks later I met him again, of course I wanted to continue teaching him. I thought maybe the Z^4+C variant would be a nice step further. Turns out he already had written the Julia version of the zoomer ... on his Android phone, while waiting at the dentist's ... O_o
Now I used to be all about fractals when I was his age, later grew up to be a 4096 byte democoder (around 2000), I was sooo jealous, what I wouldn't have given for a pocket computer that powerful! Lucky kid :)
Aaanyway, apart from sharing this cute anecdote, my point is this. There's some extremely clever young bastards out there. Now there's not many, but they're also not extremely rare. I know a handful, although this particular guy is probably the cleverest right now. They come from all sorts of backgrounds, too. But the important part is not that they're highly educated, but that they're highly educatable, and given the opportunity to develop this. Their little hacker brains are hungry enough :)
Having written his own graphics code and run up against CPU limits (although we hit float precision before it got really slow), I'm sure he'd appreciate some of the awesomeness of stunts on a limited machine like the 8088. In fact, one of the earlier graphics I programmed with him was something very similar to the circle interference pattern described in the article (it was mostly his own idea, playing with interference patterns; I just carefully nudged him towards the classic demo effect, because I knew it'd result in a really cool effect).
 He already knew how to draw stuff with Processing. He already tried to look online for how the Mandelbrot algorithm works, but couldn't quite wrap his head around it. Missing bit of information turned out to be (a+b)(c+d)=ac+ad+bc+bd, hadn't learned that in school yet. If you explain i as rotation by 90 degrees, the rest becomes quite intuitive, visually. We also took a quick skim through that great WebGL-illustrated article about "How to fold a Julia fractal" (google it), while coding, way better than the five stapled pages I had when I was 15 :) :)
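For anyone else whose missing bit is the same one: a minimal C sketch of the escape-time iteration, with the complex square expanded into real arithmetic via exactly that (a+b)(c+d) identity — for a complex number, (a + bi)^2 = (a*a - b*b) + (2*a*b)i.

```c
#include <assert.h>

/* One Mandelbrot step z <- z^2 + c using only real arithmetic. */
static void step(double *a, double *b, double ca, double cb)
{
    double na = *a * *a - *b * *b + ca;   /* real part of z^2 + c */
    double nb = 2.0 * *a * *b + cb;       /* imag part of z^2 + c */
    *a = na;
    *b = nb;
}

/* Escape-time count: how many steps before |z| exceeds 2
 * (comparing |z|^2 against 4 avoids a square root). */
static int mandel(double ca, double cb, int max_iter)
{
    double a = 0.0, b = 0.0;
    for (int i = 0; i < max_iter; i++) {
        if (a * a + b * b > 4.0)
            return i;
        step(&a, &b, ca, cb);
    }
    return max_iter;
}
```

The Julia variant is the same loop with (a, b) seeded from the pixel and (ca, cb) held constant.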
Yes it's probably tedious, but when you're really into something, at that age, you just persevere because you can :) Also young children are incredible on touchscreens, small fingers and they grew up with them :) [I'm the opposite, I have some stress/burnout related tremors in my fingers, some mornings (when it's worst) I can hardly control the device's apps, let alone typing]
Edit: happily I'm wrong. Quoting ajenner below, "there are emulators (for other targets) which do emulate NTSC decoding properly, but until I did the research for this demo nobody understood how the CGA card generates composite signals well enough to emulate it properly. I have some code which I hope to be adding to DOSBox (and any other emulators that want it) soon."
I don't feel the ethics of open-source apply here or to any works of art.
Programs like Microsoft Word, which have a near-monopoly on the work that literally billions of people do every day to be productive and feed their families, when not distributed in a free manner, are tools of unjust power.
I don't feel this person's expressive work, a lifelong dream with no monetary gain, that might merely provide a few weeks of bliss, 15 minutes of internet fame, and a little inspiration for the rest of us, then be forgotten to the sands of time, is a tool of unjust power.
> alpha-male thing going
I am sorry for whatever you've experienced that leads you to sexist comments like this. I hope that it's able to work its way through your life until you reach the point that you can simply share another person's joy without feeling entitled to have a piece of it yourself.
I can see that my words can easily be interpreted the way you did, but I was quite literal in stating that the culture is WONDERFUL. I have no hatred against them, although I'm definitely an outsider.
Also, I was literal when I referred to the closed source of these creations. I don't demand or expect them to release any source code. I'm merely wondering whether these demos would be MORE interesting with the source code released as well. As always. In 2015 it seems slightly weird that they don't.
To be clear, I definitely share the joy the authors feel accomplishing these feats. Deep respect.
But I don't get your sexism comment. I maintain that the sub-culture involves some behavior that can be described as "alpha-male". Maybe I was inaccurate with the wording. Could have said "competitive" as well.
I don't think you've ever been to a demoparty :-)
If there's any "alpha male" behaviour whatsoever, it's purely in a self-ridiculing way. It used to be there, back in the nineties when all demosceners were insecure teenage nerds and some of them felt a need to compensate for something. That part of the scene is gone for twenty years now, but it's still a lot of fun to make references to that part of history.
My favourite example of this is the demo "Regus Ademordna" by Excess . The title is the reverse of "Andromeda Sucks" in Norwegian. Andromeda is another Norwegian demogroup who had just made a reappearance in the demoscene at the time after having been gone since those nineties. They hadn't gotten the memo that all that alpha male stuff was something of the past, so they took serious offense, much to the enjoyment of the rest of the scene.
The main reasons, afaik, for demo source code not being released are mostly circumstantial. Either the democoder forgets about it, because a lot of the code is one-off stuff and the next demo is going to be fresh and new! Or their code is a terrible mess of glue, duct tape and kludges, hacked together moments before the compo deadline (see article ;-) ). The democoder intends to clean up the code (see elsewhere in this thread ;-) ) but then forgets about it because of the after-demoparty crash. Occasionally, however, they rest up a bit and later on write a great article about the tricks they pulled (while promising to clean up the code and release it "soon") -- saying this with a great big ;-) of course.
And when something happens not to be released, I've always found just mailing the coder about it incredibly helpful; they're happy to explain, I've made great friends, and learned amazing stuff.
Anyhow, omg Trixter :) You don't know me but thanks for all the work you did on the Hornet archive!
 just a random 4k coder, ritz, https://www.youtube.com/watch?v=620CmQ9CJoU / http://www.pouet.net/prod.php?which=343
There are probably a bunch of weird limitations on the adjacent colours you can achieve, but it’s an impressive effect regardless!
(Here’s the obligatory wikipedia page: https://en.wikipedia.org/wiki/Composite_artifact_colors It sounds like the demo developers have taken this effect & turned it up to 11.)
"QP's 512 mode is pretty straightforward; its only color limitation being that it can display a maximum of 40 colors on a single scan line. Mode 4K uses a special technique called "interlacing" in order to display a supposed 4,096 colors."
see also http://www.atari-wiki.com/index.php/ST_Picture_Formats
Opening the left and right borders however required doing it for each line, I recall, which uses a lot more CPU time. (Unless, of course, there's a trick I don't know!)
Spectrum 512 uses NOP timings to swap the palette at regular intervals throughout the screen; the "4096 colour interlaced" mode just flickered between one colour and another on alternate blanks to give the visual impression of flickery intermediates (before the STe came out, which used the high bit of each nybble to actually have 4-bit-per-channel palettes of 16). That technique, in turn, came from the C64 scene, as did the border trick, though I think they wrote the screen address?
What's old is new again: plenty of lower-end TN LCD panels pull the same colour trickery to fake 8-bit colour from 6-/7- bit panels (or, reportedly, 10-bit deep colour from 8-bit ones in some cases).
This demo is crazy. I don't think CGA even gives them a VBL to hang off! Wonderful.
On the C64, you could pull the border in by the width/height of one character in order to support smooth scrolling (coupled with registers to set a 0-7 pixel start offset for the character matrix). This was done so that the borders wouldn't move in/out while scrolling.
By toggling this option with precise timing, the VIC graphics chip never found itself at the "correct" location to enable the borders, and so never did.
Opening the top/bottom borders was done very early because it didn't require much timing.
Opening the left/right borders with static sprites happened soon afterwards.
Opening the left/right borders with moving sprites was particularly nasty because the VIC "stole" extra bus cycles from the CPU for each sprite present on a given scan line, so if you wanted to move sprites in Y and open the borders, you needed to adjust your timing by the correct number of cycles for each scan line, often done by jumping into a sequence of NOPs. There were additional complications, but that's the basics.
I think DYSP (Different Y Sprite Positions) on C64 was first achieved in 1988.
That's known as temporal dithering/FRC (http://en.wikipedia.org/wiki/Frame_Rate_Control). Getting the 2 extra bits of "fake" colour depth requires a 4-frame cycle, in which you display either the darker or lighter colour in sequences like 0000, 1100, 1110, and 1111. It's a form of PWM and the same technique used to drive those large outdoor graphic LED display signs, although there the frequencies are in the tens of kHz to avoid flickering.
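A sketch of that lookup in C (the table layout and names are my own; real panels do this in dedicated hardware): for a fractional level n/4 between two adjacent displayable colours, show the brighter one in n of every 4 frames.

```c
#include <assert.h>

/* 4-frame FRC patterns: row n shows the brighter colour n times
 * out of 4, so the eye averages to a level n/4 of the way between
 * the two real panel colours. */
static const unsigned char frc[5][4] = {
    {0, 0, 0, 0},   /* 0/4: always the darker colour   */
    {1, 0, 0, 0},   /* 1/4 */
    {1, 1, 0, 0},   /* 2/4 */
    {1, 1, 1, 0},   /* 3/4 */
    {1, 1, 1, 1},   /* 4/4: always the brighter colour */
};

/* Pick dark (0) or light (1) for a pixel on a given frame. */
static int frc_pick(int level, int frame)
{
    return frc[level][frame & 3];
}
```

Averaged over any 4 consecutive frames, `frc_pick(level, ...)` spends exactly `level` frames on the brighter colour.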
 Example: https://www.youtube.com/watch?v=TfVe9l77zLU
You could also change those 16 colors for every scanline (horizontal line).
Overall there was a lot of control, but software such as Deluxe Paint did not support the full possibilities of HAM mode.
Oh and, btw, EHB (extra halfbrite) mode has 64 colors: a 32-color palette + a second set of 32 with brightness halved. http://en.wikipedia.org/wiki/Amiga_Halfbrite_mode
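EHB needs no extra palette entries at all — here's a small C sketch of the lookup, assuming 12-bit Amiga colour words in 0xRGB form (one hex nybble per channel; the function name is invented):

```c
#include <assert.h>

/* EHB lookup: entries 32-63 are the first 32 palette entries with
 * each 4-bit channel halved. Shifting the whole 12-bit word right
 * by one halves every nybble; the 0x777 mask clears the bit that
 * spilled from each channel into its neighbour. */
static unsigned int ehb_colour(const unsigned int *palette, int index)
{
    unsigned int c = palette[index & 31];
    if (index < 32)
        return c;                 /* normal palette entry */
    return (c >> 1) & 0x777;      /* halfbrite shadow entry */
}
```

So a single 32-entry palette yields 64 on-screen colours for free, which is why EHB was popular for shadows.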
I wonder how much you could improve the effect (i.e. the woman after the 4:00 minute mark) by making it move. From the way it's drawn I'm not sure if you could get even 2 fps though.
Probably far fewer tricks up the sleeve are possible (especially with 3D accelerators and a dependency on a lot of proprietary AND very complex software).
Or maybe just drop to the framebuffer and push pixels like it has always been done
Sorry but doesn't that mean there are a LOT MORE tricks up the sleeve possible? They might be hard to find if you have to reverse a proprietary driver, but why not? :)
Also, I would like to propose a little challenge. What do you think you could achieve on this "virtual" computer:
- Specs : https://github.com/trillek-team/trillek-computer
- Implementation/Emulator : https://github.com/trillek-team/trillek-vcomputer-module
In short:
- 32-bit RISC CPU running at 100 kHz (up to 1 MHz)
- CGA-like text mode, but it can be remapped to any desired RAM address and the font can be changed on the fly; fixed palette of 16 colours
- VSync interrupt + two timers
- 128 KiB of RAM (up to 1 MiB)
- Floppy drive (max 1.2 MiB)
Extra: displaying it on a Commodore 1084S monitor -> http://imgur.com/GuTVEdj
I see this demo as somewhat of a challenge to the emulator authors.
DOS is more something you run "on" than "in". It's just a filing and utility layer, no task scheduler, no memory protection.