Really well done. It's a bummer that source code is rarely released for demos.
Demo source code is not often released, but there are many exceptions. For example, if you're interested in 4kB intros, look at this: https://github.com/in4k
Since 4kB intros are typically done in a shader, you can also look at https://www.shadertoy.com
For Ctrl-Alt-Test specifically, we put our source code on GitHub (but not the most recent productions)[1]. The tool to minify shaders is also on GitHub[2].
So I think you can get a pretty good idea of how we did it. If you have any questions, feel free to ask. I can give more details here, or it can be part of a future blog post.
Thanks for sharing! Your F – Felix’s Workshop[1] is pretty impressive, especially for the 64KB category. Thanks for publishing the source code[2] as well.
Given that the demoscene evolved from the cracking/warez scene, there is an implicit notion that if you want to figure out how something works, you should inspect it yourself with a disassembler.
Most of those early demos were hand-assembled anyway, so "source code" is a bit of a vague notion. You might get a bit more structure/comments in the original .s file, but maybe not. Most of those demos required heroic program shrinking to fit in memory, and that's almost always a disaster for readability.
You typically join (or form) a group because you get along with them. I'm in five groups, some with some pretty great releases (not mine), and I'm a pretty shitty graphics coder.
In fact your phrasing "elite skills" made me chuckle a bit, it sounds like the kind of thing a 16 year old would say in 1994 :) (no offense intended)
I think most people don't release source code because a) it's just not really part of the culture, b) the code is shit and c) people want to make demos, not READMEs. I mean, in the end this is about art and expression as much as anything. Electronic musicians don't usually publish their music software source files either. There is not really any elitism behind this.
c) is also why write-ups like this one are so uncommon. (Cool stuff LLB!)
(I suspect this method would also permit approximating the plastic number [ https://en.wikipedia.org/wiki/Plastic_number ], but it seems nobody has figured out a way to do so yet. I mention this due to the architectural relevance of the plastic number.)
I speculate that leveraging this fact could likely permit even more complex architecture in demos. (And, to mention an off topic point, it likely has some relevance for automating UI layouting.)
So many issues. First, that paper doesn't claim what you say it does. Second, it's just not a great paper. Even if we accept the sketchy premise, it fails to distinguish overfitting from a successful test. Numbers of the form a + b*sqrt(2) can approximate almost any number to within 0.03 units, yet there's no discussion of fitting errors to address this.
I used to do a lot of demoscene stuff as well [1]. I've been trying to get back into it, but I've found that all of the software engineering I've done in the meantime has kind of driven the "just get it working" mindset away.
I used to just write code that "happened" to work, more or less. Nowadays I would want to get unit tests in place, make it cross-platform if at all possible and get a nice GUI going for easy tweaking. It's made it impossible for me to get anything done in the limited free time I have :) ideally I'd write it in Rust, but there, too, I am waiting for tooling to catch up. I get the feeling that I'm not so much waiting for the tooling, but I'm using the waiting as a sneaky way of procrastinating and fooling myself...
The tooling for Rust is slowly becoming quite good. IntelliJ with the Rust plugin is a really good development environment. Other editors with RLS are not as advanced yet, though. So if you need an IDE environment for Rust, IntelliJ is already a solid choice.
Yes, it's getting to the point where it's not really an obstacle anymore. I can even debug on Windows using the latest version of CLion; what is the world coming to?
My suspicion is that the problem you describe is one core reason why "a raymarcher in a single shader" has become such a popular demo platform. Nobody writes unit tests inside shaders; it just... doesn't make much sense. The environment sort of forces you to stop thinking like an engineer and start thinking like a coder.
This is very true. Having said that, even with shaders I've managed to go down the rabbit hole. "Perhaps this shader can be generalized so it'll work for every vertex format... hmm, it should probably dynamically recompile then... once I'm doing that I might as well start linking arbitrary snippets of code..." etc.
A previous discussion of the source code of a 4K demo, 'Elevated', by Rgba, with a link to a really good dive into it by Iñigo Quilez, who worked on it:
Hell yeah, just looking at the length of a stack trace under our good old Java EE makes me sick (and I'm not even counting the additional calls to query the database).
(I understand there's a trade off between productivity and performance, but nonetheless, it's scary)
This is the kind of stuff I grew up on, so when I’m critical of the software and the implementations here, the bloat, the unnecessary dependencies, the slowness... this is the standard I judge them against. The mentality and the techniques in the article are applicable to any kind of software. When applied correctly, they make any software small and fast.
Sure, but that's a very small fraction of people vs those that don't really face those constraints (not that there isn't value in "thinking small and efficient")
Well, the first optimization would be to roll my own libc (not that hard at all :) ), loading API calls by ordinal, and only those I really need (or, for the sake of compatibility, digging them out of the DLL's export table based on a 16-bit hash (folded FNV32) of API name + DLL name). Same for all other libraries, everything compiled with aggressive optimization (at least /O2). The next step would be to take a compiler that minimizes bloat, like tinyc (https://bellard.org/tcc/), compiling 32-bit binaries just to save some space. Maybe even go for a .com file to avoid the PE header bloat. At the end, compress everything with something like UPX, though I'd probably roll my own PE compressor. Macros instead of functions, absolutely no classes, #pragma pack(1) on all structures (I never tried what tinyc does :D). Merging PE sections also saves some bytes.
Size optimizing is fun and you can learn a lot, but it's a dying art; probably 99.9999% of today's developers don't understand what I wrote in the first part. Today you learn programming, but very inadequately what the OS does; the typical programmer today understands programming but is clueless about what his code does at a low level... but +1 for anyone who goes in that direction. My boss at my first job used to say that good software fits on one 1.44 MB floppy, but today that's violated by high-level languages. Business-wise there's no need for it unless you're making malware, but it's still cool.
I think one of the biggest challenges, size-wise, is coming up with good algorithms for procedural content (textures, 3d meshes, camera paths, audio/synths). Code for audio playback and for doing directx/opengl scene rendering shouldn't be too hard to keep small, but you really need to work to get interesting content to fit, since you can't really include much in the way of bitmaps/audio samples/3d meshes as binary data assets.
This. For a 4KB intro, coding skills/cleverness make the difference. But at 64KB, the artistic side can exist, and that's where you can make a difference. Farbrausch stuff is cleverly coded, but the aesthetics were just ahead.
TCC is not a good compiler for sizecoding. It is itself very small, but that's because it doesn't do much in the way of optimisation at all and generates lots of redundant instructions that you can't make up for with executable compression.
One thing you're missing:
Smaller size before compression does not necessarily imply smaller size after compression.
For instance, /O2 has a tendency to generate code that is difficult to compress. So it's better to keep an eye on how you achieve smaller file sizes after compression, and not worry so much about sizes before compression.
Most demoparties forbid linking by ordinal these days, since ordinals tend to change between Windows versions. Importing by hash is the go-to method now.
Maybe I wasn't clear enough: libc functions implemented directly as calls to functions resolved via GetProcAddress (by ordinal/hash). :) But just those that you actually use :)
You still need to get the function pointers, either with GetProcAddress or by searching the DLL's export table yourself after LoadLibrary (that's the idea with the hashes). The third option is to leave it to the PE loader, but that burns space in the PE import table.
The music part of Elysian is really cool since they wrote a synthesizer and a tool allowing them to make music in Ableton Live (a very popular music production application) and convert it back to code for their demos. Ferris (the guy who wrote most of it) has a video on YouTube explaining how it works and I was absolutely blown away by it.
I don't have three off the top of my head, but running through the entirety of Farbrausch's releases starting with "The Popular Demo" [1] will give you a sense of awe.
For me one of the best of all time is Second Reality [1], by Future Crew. Developed in 1993 no less, it was a big breakthrough at the time, with almost 10 minutes of animation on early x86 hardware. The source code is available [2].
[1] https://www.youtube.com/watch?v=rFv7mHTf0nA
[2] http://fabiensanglard.net/second_reality/index.php