Hacker News
How a 64k intro is made (lofibucket.com)
473 points by tiborsaas on May 22, 2017 | 59 comments



There is a nice documentary on the history of the demoscene, with some insight into the creative process behind a demo/intro. It was made by a Hungarian team, which has made other interesting documentaries on underground culture.

It is also available in English, at: https://www.youtube.com/watch?v=iRkZcTg1JWU



FYI, there are a lot of interviews in that video that are not in English.


I'm sorry to hear that. There should be CC available, but to be honest I have only watched the Hungarian version in full; I just knew that an English version exists.

I hope you still find it interesting.


There are English subtitles available; you just need to turn CC on in YouTube and click the gear next to it to make sure it's set to English.


It was interesting. I wish I knew Hungarian. :)


The closed captions are pristine. After a short while, I forgot I was watching something not entirely in native English, to the point where I can't now recall what was in English vs what was translated. Huge kudos to those concerned!


Make sure to turn on the subtitles.


In the other sciences, you strive to take the complexity of the world and to reduce it to underlying principles. In Computer Science, you start with very simple rules and strive to build as much complexity as possible. This is what I really like about demos: they represent what I think makes programming interesting.


It seems like we do go in the other direction, too, though. When computer scientists encounter some new problem domain—e.g. rendering photorealistic scenes at 60 frames per second—people analyze the domain into various 'primitives' or underlying principles. We don't have the same notion of 'truth' in selecting a set of underlying principles like you get in the sciences, though, which means multiple sets can survive and be used in different contexts (e.g. using raymarching and distance functions, versus using triangle meshes and Phong shading)—there doesn't only have to be one.

Then at some point, other engineers use those 'underlying principles'/primitives in order to build up particular complex instances of whatever domain was analyzed in the previous stage—but it does start with the science-like analyzing and 'theory building' (not literally—it just has an analogous structure/role).


For anyone interested in this stuff, check out pouet.net (http://www.pouet.net/prodlist.php). They have demos as small as 32b. Most of them have YouTube mirrors, and usually they have the executables if you'd like to verify yourself.


At first I thought you wrote "32kb" and I was going to correct you.

Note to others: that's 32 BYTES! :)


here's an 8 byte demo :> http://www.pouet.net/prod.php?which=63126

asm file is included


Yes, most have what we refer to as "renders", so you don't have to load up your favourite hardware to run them natively.


If you like this stuff then you may also be interested in an excellent streamer (ferrisstreamsstuff - mentioned in the extra stuff section) who often makes demos:

https://www.youtube.com/watch?v=V8JXraZPkh8&list=PL-sXmdrqqY...


OMG, thank you for this. I love looking behind the scenes in video stream format.


These are always so wonderful. The size limitation is the icing; they are gorgeous on their own. The fun is in seeing what clever tricks they use to make awesome stuff out of so little information.


Great article, the demo scene was what got me into programming as a teenager back in the 90s and is still inspirational today!


Yup, same here!


God, it's a beautiful intro. I really like the science-fiction feeling to it!


After quickly skimming this article, I got sucked down the rabbit hole of "ray marching" - the concept and its implementation (mainly as a shader). The idea and how it works was fascinating, if a little above my head mathematically. I got the general gist, though: it's kind of a combination of ray casting and ray tracing, where you send out rays and step along them, but in ray marching you don't necessarily cast secondary or tertiary rays like you do in ray tracing (thus limiting the processing, but also the level of realism), and you can continue to "step" (i.e. "march") along the ray past an intersection to gather more information - which lets you do volumetric stuff with transparency: smoke, water, clouds, marble, skin, etc.

Stepping is done using something called "sphere tracing": at each step along the ray, you compute the distance to the nearest surface in the scene and treat it as the radius of a sphere you can safely step through. As you get closer to surfaces, the spheres become smaller (and so do the steps, meaning more computations). This keeps you from sampling along the ray where there is no scene nearby, and lets you take larger "jumps" along the ray to keep the computational requirements down (it has limitations, particularly along the edges of objects and in complex scenes).
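The stepping loop described above is small enough to sketch - here in Python rather than a shader language, with illustrative names, just to show the shape of the algorithm:

```python
import math

def sphere_sdf(p, center, radius):
    # Signed distance from point p to the surface of a sphere
    return math.dist(p, center) - radius

def raymarch(origin, direction, sdf, max_steps=100, eps=1e-4, max_dist=100.0):
    """Sphere tracing: step along the ray by the distance to the
    nearest surface; a hit is when that distance gets tiny."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t   # hit: distance travelled along the ray
        t += d         # safe step: nothing in the scene is closer than d
        if t > max_dist:
            break
    return None        # miss

# March from the origin toward a unit sphere centered 5 units down the z axis
hit = raymarch((0, 0, 0), (0, 0, 1), lambda p: sphere_sdf(p, (0, 0, 5), 1.0))
# hit is approximately 4.0 (the ray travels 4 units before touching the sphere)
```

Note how a big empty scene costs almost nothing: the very first distance query lets the ray leap most of the way to the sphere in a single step.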

For 4k compos (and in general, I think), objects in the scene are described algorithmically (i.e. you don't create a model of a sphere or torus or cube, but rather the mathematical representation of it), so intersections with a ray become almost trivial. At the same time, complex scenes can become tedious to lay out, so you typically see tessellations and repetitions of scene objects, or other mathematical constructs (fractals and the like) that can generate complexity from simple representations (thus procedural rendering and representational concepts come into play, of course).

There was a ton more involved (found interesting shadertoy demos from known authors, going over the concepts of how raymarching and related techniques are used - especially in the context of GPU shader rendering, where it is frequently used in 4k compos - the idea of "two triangle demos", doing everything in the shaders) - and I probably got a lot of this wrong or over-simplified (as I said, I felt a lot went over my head in the math arena).

But I think I got the gist. Regardless, it was a fascinating exploration, and taught me some new things I didn't know about until then; I haven't kept up much with the demoscene since I stopped playing around with my Amigas a long time ago (aside from the occasional amazing demo or whatnot here and there - especially those authors who like to push their Craft - hint-hint - using microcontrollers). So it was nice to see what "state of the art" was. Although - milkytracker is still being used as well - which is nice to see.


I think that's a pretty nice overview you have. One thing I would clarify is about how the objects in the scene are represented mathematically: it's done using 'signed distance functions' so every object you have in the scene is actually created through one of these functions which takes a point in 3D space and returns the distance to the nearest point on the object's surface to that point—which does, as you say, make the object/ray intersection testing trivial (the return value of this function is the radius of the sphere used to step along the ray).

The other basic components when creating scenes to be raymarched are distance functions which combine other distance functions together. The simplest cases are just union and intersection, but it's easy to get more interesting things like smoothly blending objects together. This page is a nice reference of these 'combining' functions as well as distance functions for geometric primitives: http://mercury.sexy/hg_sdf/ I think that combining power is one of the main reasons to use raymarching/distance functions: building procedural content this way is faaar easier than trying to knit triangle meshes together.
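To give a feel for those combining operators, here's a tiny sketch (Python standing in for GLSL; the smooth-union formula follows the polynomial smooth-minimum popularized on the pages linked in this thread - treat the exact blending constant as an assumption):

```python
import math

def union(d1, d2):
    # Closest of the two surfaces wins
    return min(d1, d2)

def intersection(d1, d2):
    # A point is inside only if it is inside both objects
    return max(d1, d2)

def smooth_union(d1, d2, k):
    # Polynomial smooth minimum: blends the two surfaces
    # smoothly where they are within k of each other
    h = max(k - abs(d1 - d2), 0.0) / k
    return min(d1, d2) - h * h * k * 0.25

# Two unit spheres, one at the origin and one at x = 1.5
sphere_a = lambda p: math.hypot(p[0], p[1], p[2]) - 1.0
sphere_b = lambda p: math.hypot(p[0] - 1.5, p[1], p[2]) - 1.0

p = (0.75, 0.0, 0.0)  # midpoint between the two centers
d_union  = union(sphere_a(p), sphere_b(p))
d_smooth = smooth_union(sphere_a(p), sphere_b(p), 0.5)
# d_smooth < d_union: the smooth blend bulges the surface outward
# in the gap between the spheres, welding them together organically
```

This is the "combining power" mentioned above: a handful of one-line operators gets you CSG plus organic blending for free, with no mesh stitching anywhere.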

Additionally, if anyone wants to go a little further on the subject of building up sophisticated content by raymarching distance functions, this is an excellent talk on the subject: https://www.youtube.com/watch?v=s8nFqwOho-s


Here is another page on distance functions...

http://www.iquilezles.org/www/articles/distfunctions/distfun...

...written by one of the authors of this amazing 4k (four kilobytes!) demo:

https://www.youtube.com/watch?v=jB0vBmiTr6o


Heads up to people who went to the article and downloaded the actual binary to see how it renders on their own box: I detected a trojan in it!

I was still able to run it in a sandboxed environment and it worked fine, and Kaspersky didn't chirp - so with an outdated or incompetent AV, you wouldn't even notice.

ESET found malware in the demo file they have up --> guberniya_final.zip http://imgur.com/a/1R4Kl


It's most likely a false positive. The packers etc. the demoscene uses employ the same tricks as malware does and thus trigger heuristics in AV software.


Slightly more accurate to say that "heuristics" in most AV software is simply shit.

It's not even the real clever hacks and actual tricks that are used in tiny size compos that trigger the AV. Stuff like polymorphic self-modifying code (to name one actual trick) is generally only used for size compos smaller than 4096 bytes.

It is really just the executable packers that trigger these shitty AV "heuristics"[0] - not just demoscene tools like kkrunchy, but also more "mainstream" ones like UPX. IIRC certain versions of Opera were packed with UPX and also occasionally triggered the odd virus scanner.

It pissed me off here because they're smearing one of my favourite art forms. Most of the time they don't even qualify it as a "possible threat", but dig up an actual scary-looking malware name from the database and say it's GenericMalDestructoTerroristLoader.1 or whatever.

It's ridiculous. Imagine a world where most virus scanners trigger when they detect minified/uglified JavaScript. Certainly, web-based malware exploits use that for obfuscation.

For commercial AV vendors, false positives are in fact good for business. A virus scanner that never reports anything (because you have the good sense not to click on attachments or unexpected download/install/admin prompts) doesn't have a lot of perceived value. That's why at some point all the virus scanners also wanted to scan your PC for "tracking cookies", which is not their job at all, but made it seem like they do something. Compare to something like MS Security Essentials, whose incentives are the opposite, and aligned with yours: Microsoft wants Windows to appear a solid and secure OS, so the scanner keeps quiet while doing its darnedest to keep the user from getting hacked.

[0] Weirdly enough, when you apply a packer to a piece of malware that would otherwise be detected as is, it suddenly foils the virus scanner. Especially if you stack two different packers. Can't find the link where they researched this but IIRC, Kaspersky called out the research as "irresponsible".


> but in ray marching you don't cast secondary or tertiary (or more rays) like you do in ray tracing (thus limiting the processing, but also the level of realism)

Not necessarily. The marching is used to sample a volume to find intersections (or density), so you can absolutely march secondary rays for shadows, reflections etc.

http://9bitscience.blogspot.se/2013/07/raymarching-distance-...


At first I was surprised they didn't use any assembly language. However, GLSL is their replacement...


Especially in 64ks, assembly isn't particularly necessary - limited use of the standard API and a good exe packer will almost always get you under the limit (especially when a lot of your code is GLSL, as text compresses well).

Smaller intros like 4ks, on the other hand, will often use assembly (although completely shader-based ones are becoming more common).


I wonder how common GLSL is for demos, though; as far as I know you have to have the actual source text and send that to the driver at runtime for compilation. In contrast, shaders for DirectX are AOT-compiled, greatly minimising their size. An executable packer might mitigate that for the most part, although it probably depends.

In this particular case they noted that they weren't size-constrained anyway.


Both GLSL and HLSL are very popular, for both 64kB intros and 4kB intros. Even Elevated is storing the actual text in the binary (http://www.pouet.net/prod.php?which=52938). Compressed text can be quite small.

I wrote a minifier specifically for these use-cases and it has been used in many demoscene productions: https://github.com/laurentlb/Shader_Minifier On a 64kB intro like Guberniya, it can save a few kilobytes (but size was not an issue for them) - after compression.


Zipped GLSL (if you're using short variable names) is usually smaller than the corresponding DX byte code, so it turns out to actually be an advantage.
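The effect is easy to see with any deflate implementation - a quick sketch, with Python's zlib standing in for the executable packer and a made-up, deliberately repetitive fragment shader as input:

```python
import zlib

# GLSL source is full of repeated keywords and identifiers
# (uniform, vec3, color, sin...), which deflate handles very well.
shader = b"""
uniform float time;
uniform vec2 resolution;
void main() {
    vec2 uv = gl_FragCoord.xy / resolution.xy;
    vec3 color = vec3(0.0);
    color += vec3(0.1, 0.2, 0.3) * sin(time + uv.x);
    color += vec3(0.1, 0.2, 0.3) * sin(time + uv.y);
    color += vec3(0.1, 0.2, 0.3) * sin(time + uv.x * uv.y);
    gl_FragColor = vec4(color, 1.0);
}
"""

packed = zlib.compress(shader, 9)
print(f"source: {len(shader)} bytes, packed: {len(packed)} bytes")
# packed is substantially smaller than the source text
```

A minifier (short variable names, stripped whitespace) shrinks the input before the packer even sees it, which is why the combination of minified GLSL plus a packer competes so well with precompiled bytecode.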


The article mentioned that the final executable was packed again, so the GLSL source code is also stored compressed.


Actually, a better comparison for shader languages is doing DMA with the blitter on the Amiga.

https://en.wikipedia.org/wiki/MOS_Technology_Agnus


Actually the interesting part in Agnus is the Copper, a small "co-processor" (which can trace its roots to the Atari XL/XE series and its ANTIC processor) that could execute instructions triggered by the beam passing specific coordinates on screen.

The obvious advantage of this is that you do not need to interrupt the 68k CPU, which in the era of single-digit MHz speeds was very nice.

The blitter (and other things) could be controlled by the Copper..


Yes, the Copper is a very cool thing. I've done insane copper lists with up to 52 splits per rasterline (each split is 8 pixels wide), and then a unique new set of colors for each rasterline. The result is very mesmerizing. It can look like this: https://youtu.be/d1axngYxuuo?t=3m19s and this: https://www.youtube.com/watch?v=fFeHd5hoRyM

The Copper can also wait for the Blitter to be ready, and the Copper can give the Blitter new instructions and start a new blit. The circle would be complete if the Blitter could blit into the hardware registers for the Copper and other chips, but it cannot. :-( I think the Atari STe Blitter can do that though.

But you can use the Blitter to modify the Copper list, so perhaps they together are Turing complete?


Yes, that is what I had in mind, but could not remember any more the details how it used to be.


Amazing. Brings me back to the 90s and my days with the FUEL coders, when putting asm as the first line of your demo was the thing to do.


64k + gigabytes of system libraries and GPU drivers.


It wouldn't be HN without an immediate dismissive comment!


Didn't intend to be dismissive. I think they're awesome. It just seems incorrect to conflate one of these 64k demos with one that runs on, say, a Commodore 64. Context is needed. There's a lot more code actually running.


On the other hand, the C64 had standard hardware without a need for abstraction layers, and the hardware was trivial to use from assembly - just put numbers into various memory addresses. Plus the HW natively supported things like sprites and sound synthesis.

I guess one of the most difficult platforms for demo programming (in terms of having to write things from scratch) was PC in the DOS/Win16 era. You could get into 320x200x8bit graphics mode quite easily (the so called Mode 13h) but beyond that you were on your own.


Yeah, that's true. Still remember the hardware register on the Atari ST $ff8240.

Best thing about standard hardware was that if somebody wrote something faster/better than you did, then their code was demonstrably better - you couldn't blame it on drivers or anything else.

I had this situation with somebody called Darek Mihocka who wrote an accelerated text function for the ST. I beat his code, he then came up with something nearly twice as fast as mine! I went crazy wondering how he did it until the 'move.p' instruction came to me, I shit you not, whilst I was asleep.

Good times, and a great way to learn how to code and more importantly what really happens on the machine underlying it.


Ahh, mode 13h. Fun times.


That's a straw man. Nobody's conflating anything. This is a PC intro, not a C64 intro and it's advertised as such.


Quite true. I personally don't have a problem that PC 64k intros today will leverage the GPU as much as possible, with all the help from the driver code that that implies. It's how they stay current and awesome. Of course anyone writing a 64k intro with no help at all from the GPU except perhaps context creation and teardown is just giving themselves more creative limitations that I would respect.


There are 64k demos that use software rendering only.



Do you know if that's a specific category?


Indeed - I used to write them when I was young and had more time :)


It's unclear what exactly you're agreeing is "true". If you're agreeing with the "incorrect to conflate", then it is true that it would be incorrect, but not true that this needs to be pointed out.

With only a little bit of investigation, or even participation, it quickly becomes entirely clear that compos divide entries by hardware, and people in the demoscene don't ask oldschool 64ks to measure up to PC 64ks.

I recommend having a look at the various compo blocks on pouet: https://www.youtube.com/user/RevisionParty


[flagged]


We've banned this account for repeatedly violating the guidelines after we've asked you many times to stop. We're happy to unban accounts if you email hn@ycombinator.com and we believe you won't do this any more.


Ah, you're just really bad at communicating and get upset when people don't recognize your innate "cred".

I tried to be nice to you because what you said was ambiguous, period. But i guess if that's how you react then you deserve those downvotes.


They mostly provide hardware abstractions; I suspect that the intros, if coded in 100% bare metal (Amiga style), wouldn't be that much larger.


Indeed. PC 64k intros in the 90s were just as popular. Initially running on MS-DOS but later moving to Windows. Unless you have some memory-mapped interface to the underlying hardware (like the C64/Amiga) you'll always have device drivers, I don't see how that's "less impressive" at all.

It's not like a device driver has an undocumented "MakeAwesome()" call that modern 64k intros are abusing.


Except for the use of OS-provided General MIDI soundbank (Windows has gm.dls), which seems to be common practice now.


I thought this was going to be about how to introduce people to investors who put in $64k :)


Why is this being downvoted so heavily? Given the forum it seems like a very reasonable expectation.


Simply put, it does not add to the discussion. It is a mildly amusing personal quip that invites no further insight. It is also not relevant to the link - HN is not exclusively about startup funding. HN is picky about the comments it promotes, and most of us feel that's a good thing.

