Source code for 4kb demoscene production “Elevated” released (scene.org)
384 points by jsheard on June 6, 2016 | 147 comments

Release info: http://www.pouet.net/prod.php?which=52938

Binary: https://files.scene.org/view/parties/2009/breakpoint09/in4k/...

Video: https://www.youtube.com/watch?v=jB0vBmiTr6o

Everything you see and hear is procedurally generated by the 4096 byte executable, in real time. It still blows my mind 7 years after release...

Here is a presentation put together by one of the guys behind this release that explains a lot about how it was made.

Yes, thanks. I'm not sure where my head is today.

Any mirrors? That link is down.

Ah, so that was Iñigo Quilez!

The 4kb genre in the scene has been incredibly vibrant over the last few years. I think this demo really helped kickstart that.

4KB using DirectX/OpenGL is very different from the old days of writing stuff directly to the video card.

I bet they'd still prefer to write directly to the video card, but it's kinda hard nowadays to access GPU resources without an API.

new constraints though... you've got to fit a couple of shaders in there too :)

Exactly, 4kb on top of the hundreds of megabytes of libraries and drivers.

Somehow you make it sound easy, or are even belittling it.

I'd say this is quite an achievement anyway when I know I'd use more space than that just to store a single jpeg or mp3 sample.

I think what parent was trying to say is that it's absolutely an achievement relative to storing a jpeg or mp3 sample, perhaps not so impressive next to the old 4k intros.

...as opposed to another hundred MB or so on top of that.

Could you show me what you can achieve with 4 KB on top of hundreds of megabytes of libraries and drivers?

You don't have to be a car manufacturer to criticize a car, nor an architect to criticize a building.

"If you can't do it better, shut up" is a stupid argument.

"That's easy, anyone can do it" is similarly stupid, especially when anyone who actually does stuff like this says it's impressive.

But nobody said that.

There's a pretty clear implication.

Even 4KB demos in browsers are a thing these days.

the part I loved best was how those 4kb uncompressed to over 250MB of GPU instructions

For anyone who's interested or thinks this stuff is cool, the author of Elevated has made a website for experimenting with real-time pixel shaders in WebGL: https://www.shadertoy.com/

Some pretty incredible things have been done there.

A friend of mine also has this project, http://shaderfrog.com/

Lets you experiment with and compose different shaders.

This is brilliant, I use shadertoy a bit to quickly test out where I've gone wrong in my fragment shaders (which is usually the whole thing).

This actually seems like it'll be much better for the job.

I'd love to see ISF export added too: https://www.interactiveshaderformat.com/spec

Holy crap how have I not heard of this? This already scratches an itch I have for no good vertex shader editors. I'm looking at the "composed" graph editor thing, a little confusing but it looks really powerful. Thanks for sharing.

He has a version of Elevated which runs on shadertoy:


(iq is him.)

Alas, it doesn't run on this Chromebook...

After viewing LFT's work using an ATMEL microcontroller as a demoscene platform: http://www.linusakesson.net/scene/craft/index.php

...I had the thought that a possible frontier in the demoscene is making your own hardware out of discrete components to run your demo.

The MOnSter 6502 would count - http://monster6502.com/

As if that wasn't enough, lft has also written a demo which builds a custom CPU/GPU on an FPGA to run itself:


Most programming jobs must feel terribly boring for anyone who managed to write that.

Here's what he's been up to lately: http://mygtc.gputechconf.com/quicklink/esLu6kr

Will an online recording be available of that talk? I'd love to watch it.

Officially, I think the videos from GTC2016 are behind a free registration wall. You can register here: https://mygtc.gputechconf.com/form/account

Unofficially, you can find this particular talk here: http://on-demand.gputechconf.com/gtc/2016/video/S6860.html

Sweet, thanks!

iq of rgba worked at pixar.

In the year 2000, when I was 16 years old, the 64k intro fr-08 by Farbrausch told me that I knew nothing about programming ;)

[1] https://www.youtube.com/watch?v=Y3n3c_8Nn2Y

Indeed, and for me, before that, Future Crew's Second Reality.

Second Reality was the best. The first demo I saw was Spaceballs' State of the Art https://www.youtube.com/watch?v=c5kuYfTCGLg and I vividly remember thinking 'wtf is that, a video or what?' Then came Second Reality a year or so later: https://www.youtube.com/watch?v=rFv7mHTf0nA But then (I had watched each and every demo out there by then, at the age of 13) I saw the Mars demo https://www.youtube.com/watch?v=_zSjpIyMt0k which hooked me forever on the path of graphics programming.

Oh yes! I remember Mars! At some point, there was a text floating around the net which described how it worked; that was mind-boggling for the 14-year-old me.

Although they generally didn't compete in the 4kb genre, Farbrausch open-sourced many of their demos and demo-making tools a few years ago [1].

fr-041: Debris [2], and kkrieger [3] (the 96kb FPS) are particularly worth checking out for the uninitiated.

[1] https://github.com/farbrausch/fr_public

[2] https://youtu.be/wqu_IpkOYBg

[3] https://youtu.be/2NBG-sKFaB0

Does this mean that lots of 3:30-minute 1080p videos could be compressed into 4kb?

EDIT: We can be generous and say 40kb for sake of adding more colours / etc.

This is actually an insightful question.

The practical answer is no. There is an unimaginable number of possible 3:30-minute videos—far more than the number of possible 4kb or even 40kb files.

To be fair, most of those possible videos are just noise. We don't have to be able to compress those because people don't care if one video of noise is different from another. We also don't have to reconstruct the video perfectly: as long as it looks more or less the same, the audience is happy. (This is called "lossy compression".)

But even with these caveats, there is no realistic method for compressing realistic 3:30 minute videos that well on a computer. We likely can't do all that much more than current compression algorithms without a different set of tradeoffs. (Like being better at some videos but worse at others.)

That said, a big part of how compression works is by relying on information already present when decompressing. This demo relies on having a particular kind of chip with certain capabilities (ie a CPU and a GPU) and presumably some standard library functions... etc.

How well could we "compress" videos if we had more information available when decompressing? Here's a fun thought experiment: what if we had a model of a human mind? We could then feed in a pretty sparse description and have the model fill in the details in a natural intuitive way. It would be very lossy, but the results would be compelling.

And you know what? That's a decent mental model of how speech works! If you just look at information content, spoken words are not very dense. But if I describe a scene you can imagine it almost as if you're seeing a video. This works because we both have the same sort of brain as well as shared experiences and intentions.

You can think of speech as incredibly effective—but also rather lossy—compression.

It could be very useful to deliberately pursue SUPER lossy compression. As long as no one can really tell based on the end result, it doesn't really matter.

For example, if you can only tell something was lossy by directly comparing two instances of the same video during playback, then that's probably good enough in most situations.

It occurred to me that we could compress the hell out of written works by translating them into some super dense language, ultimately retaining only the basics of the meaning, the concepts, and some of the writing style. Then we can re-translate that back to whatever language we want to read it in.

For compressing pictures or videos, there could be some similar translation to a much more compact representation. Would probably rely on ML heavily though.

4K of English text is a couple of pages of a novel, enough to describe a character and a situation, maybe an interaction. A good writer can conjure up a whole world in 4K... but probably not a description of an arbitrary 3 and a half minutes of activity.

That being said, speech is far from the best compression we can ever hope to achieve.

Nice insight you brought with the CPU and the standard libraries being a relevant factor, hadn't thought of that.

Your thought experiment sounds more like a "codec" than a procedural generation. I guess it is an arbitrary line given that we are using CPU, etc. But the bigger the decompressing "model" the further away from true 4k compression we are.

Take a look at https://en.wikipedia.org/wiki/Kolmogorov_complexity

The Kolmogorov Complexity of a video (or any other data) is the size of the shortest program which outputs that video then halts. This 4k executable is similar in spirit, but also follows strict rules about efficiency: Kolmogorov complexity places no time limits on that shortest program, whereas this program must output pixels fast enough to make the video realtime.
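To make the definition concrete: a tiny program can emit output far larger than itself, which bounds the output's Kolmogorov complexity by the program's length. A minimal Python sketch (mine, not anything from the thread):

```python
# 18 characters of "program" stand in for 4,000,000 characters of
# output, so the output's Kolmogorov complexity is tiny relative
# to its length. A random 4 MB string, by contrast, almost
# certainly has no generator much shorter than itself.
program = '"demo" * 1_000_000'
output = eval(program)

assert len(program) == 18
assert len(output) == 4_000_000
```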

Sorry, I thought it was obvious, but the question is: Could procedural generation be used to achieve amazing compression rates given a currently impossible to code algorithm?

No, only very specific videos, like this particular one. The art is in finding a pretty video that you can render in 4kb, not in making a pretty video and then reducing it to 4kb. The latter would most likely be impossible.

"39. Re graphics: A picture is worth 10K words - but only those to describe the picture. Hardly any sets of 10K words can be adequately described with pictures."

It's the pigeonhole principle; there are only a few long videos possibly encodable as short programs because there are only a few short programs in the first place. To get compression performance, one has to target an ever smaller subset of possible videos, which eventually starts becoming an AI-complete problem.
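The pigeonhole counting is easy to make concrete; here is a rough back-of-the-envelope sketch (my own numbers: 4096-byte programs versus raw 3:30 videos at 1080p, 24-bit color, 24 fps):

```python
from math import log10

# Distinct 4096-byte files: each byte takes one of 256 values,
# so there are 256**4096 of them -- about 10**9864.
program_digits = 4096 * log10(256)

# Bytes in one raw 3:30 video: 1080p, 24-bit color, 24 fps.
bytes_per_video = 1920 * 1080 * 3 * 24 * 210
video_digits = bytes_per_video * log10(256)

# Vastly more possible videos than possible 4 KB programs, so
# almost all videos have no 4 KB representation at all.
assert program_digits < 10_000 < video_digits
```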

> It's the pigeonhole principle

Is it really? Could a human meaningfully distinguish between 2^4096 different 4 minute videos?

Sure. 2^4096 is 10^1233. Let's just look at dialogue. Even if you limit yourself to boring 5-word sentences with 2,000 possible words for each position (subject verb preposition adjective object), 5^2000 = 8.7 * 10^1397 which means in the very first sentence you've got 10^164 times as many videos as you could possibly index with only 4096 bits.

Late addition: I thought I fixed all the stupid math problems before I posted this, but it's still totally wrong. Even leaving aside the fact that English doesn't have 2,000 prepositions, which I just glossed over :)

A five-word sentence with 1000 options per word isn't 5^1000 but only 1000^5 = 10^15. If we break the movie into 5-second blocks we get 48 of them in a 4-minute movie so (10^15)^48 = 10^720 different movies, which is not bad but we're still 10^513 away. There are a lot more variations we could consider - different actors, costumes, sets, framing, color grading etc. and I think it's plausible that we could come up with enough features. Heck if you talk twice as fast, you could get (10^15)^(48*2) = 10^1440. But it's a lot bigger than I made it out to be.
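The corrected arithmetic above can be checked mechanically; a quick sketch using the comment's own (hypothetical) numbers, and treating "4kb" as 4096 bits the way the parent comment does:

```python
from math import log10

# The parent treats "4kb" as 4096 bits: 2**4096 distinct files.
indexable = 4096 * log10(2)          # about 10**1233
assert round(indexable) == 1233

# Five-word sentences, 1000 options per slot: 1000**5 = 10**15.
sentences = 5 * log10(1000)
assert round(sentences) == 15

# 48 five-second blocks in a 4-minute movie: (10**15)**48 = 10**720.
movies = 48 * sentences
assert round(movies) == 720

# Still short of the 2**4096 index space by about 10**513.
assert round(indexable - movies) == 513
```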

Obviously it would be AI-complete. I didn't know that term, that's what I meant by currently impossible to code. I just learned my favorite term ever, thanks for that!.

Although disappointing, you seem to have the correct answer for my question.

The difference between procedural generation and a video is similar to the difference between raster and vector graphics. Demoscene intros like this are more like your computer giving a live performance from scratch than playing a movie. Ideas like video compression don't really apply. They create 3D models and textures from simple math functions and filters, make a world from them, add more math functions for camera movements, and play some synthesized music that's more akin to MIDI than MP3 (to put it simply).

I recently began making a function that can output the 2D lines of the walls of a house, with windows and different shapes (L, S, T), and inside walls generated from points and NESW directions. It was pretty fun and challenging, but now I have to move to 3D to turn this baseline into a level with windows and doors.

The only things I have to give this function are the height/width ratio, another ratio that defines how large the "corner holes" are in the L/S/T configurations, the number and relative positions of windows and doors, and the starting point and NSEW direction for inside walls. With all that, I can create a one-story house with an interior. Of course it's not finished yet, and there's no furniture or detail, but you can see that in theory you can use procedural generation as a compression tool for human-designed structures, in a way that no machine learning algorithm or autoencoder could really achieve.
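A toy version of the idea can be sketched in a few lines. This is my own illustrative invention (the name and the single L-notch parameter are assumptions, not the commenter's actual function):

```python
def house_outline(width, height, corner=0.0):
    """Return 2D wall segments for a house footprint.

    corner is the fraction of width/height notched out of the
    top-right corner to make an L shape (0 gives a plain
    rectangle). A stand-in for the ratio-driven parameters the
    comment describes; the real function also places windows,
    doors, and NSEW inside walls.
    """
    w, h = width, height
    if corner <= 0:
        pts = [(0, 0), (w, 0), (w, h), (0, h)]
    else:
        cx, cy = w * (1 - corner), h * (1 - corner)
        # Walk the outline counter-clockwise, notching the corner.
        pts = [(0, 0), (w, 0), (w, cy), (cx, cy), (cx, h), (0, h)]
    # Pair each point with the next, wrapping, to get wall segments.
    return list(zip(pts, pts[1:] + pts[:1]))

assert len(house_outline(10, 6)) == 4              # rectangle: 4 walls
assert len(house_outline(10, 6, corner=0.3)) == 6  # L shape: 6 walls
```

A few ratios in, a full outline out: that asymmetry between parameter size and output size is exactly the kind of compression the comment is pointing at.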

If you combine this kind of algorithm with a well-made OpenStreetMap database (think of the vector tiles used in GPS software), you could also recreate the whole world in 3D, with enough detail to make a game that would not require much disk space. Recreating the roads, fences, parks, rivers, vegetation, elevation, etc. is difficult because it requires a lot of tuning and geometry tricks, but it's very cheap in terms of CPU cycles and disk.

The folks at Outerra have begun making actual software that lets you browse the entire planet in 3D. You can zoom in real time from space down to 1 cm. They don't have cities yet, but they are planning for them. I want to make a game using such ambitious ideas, but it's not easy...

Look up algorithmic information theory. To use a poor analogy, it is to procedural generation what information theory is to compression.

No, only very specific videos, like this particular one.

Well, it's not compressed, it's generated. You could generate an endless video with less code, but it would most likely be uninteresting. Scene demos are interesting because it's art and direction and music generated from algorithms rather than creating those things and compressing them efficiently.

But, yes, at some level there is an idea of a DNA seed and a process to create something much more profound; we as humanity haven't come close to cracking that, though.

I suspect that if it were at all possible to have an algorithm that can generate the seeds plus the process to expand them, that algorithm would take orders of magnitude longer to run than would be practical on any meaningful time scale.

Not visuals, but in a similar vein: random number generators with high dimensionality and equidistribution can be coerced into generating very specific output, given enough exploration of the output space.

For example, an output of all zeros, or the source of a random number generator itself, or a zipped archive of a work of Shakespeare.

It's fun to think about anyway.
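The seed-search idea sketches easily. Here's a hedged toy version (only a one-byte target, since the search space roughly doubles with every extra bit of output you pin down):

```python
import random

def find_seed(target, max_seed=200_000):
    """Brute-force a PRNG seed whose first outputs equal target.

    Given enough exploration of the seed space, a generic
    generator can be coerced into emitting a chosen prefix --
    but only a tiny one; pinning down a whole zipped Shakespeare
    this way is astronomically out of reach.
    """
    for seed in range(max_seed):
        rng = random.Random(seed)
        if bytes(rng.randrange(256) for _ in target) == target:
            return seed
    return None

seed = find_seed(b"\x00")       # a zero byte: ~1 in 256 seeds match
assert seed is not None
assert random.Random(seed).randrange(256) == 0
```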


But how many colours do we speak of now?

classic 4k, beautiful stuff. here's another one of my favourites from 5 years ago:




Beautiful soundtrack from Lassi Nikko (tpolm/orange) and crazy cool raymarching shader work from unc (quite)

shadertoy version with VR support:


I know a RasPi doesn't have anywhere near the specs needed to run this, but I'd love to gut an old flat-screen monitor and put it in a frame with a RasPi running something like this, generating random "art", and hang it on a wall somewhere...

you mean like this? https://www.electricobjects.com/

That's awesome! Thank you for that link, a little pricy but the screen would be nicer than what I have laying around and it's more plug-and-play, bookmarked!

Conspiracy's 64kb demo from earlier this year is fantastic too.

http://www.pouet.net/prod.php?which=67106 / https://www.youtube.com/watch?v=qRVTI2y0BBg

Awesome stuff, yet it was only voted 2nd in the competition (Revision 2016 64kb compo). Here's the winner (fermi paradox by mercury):


The scenes with the moon being peppered with impacts (3:31) and what appears to be storms on Jupiter are beautiful.

As someone who was there: Darkness was impressive for sure, but not all that fun to watch, since it was mostly static dioramas, which the viewer can grok in a few seconds, and then it goes "oh and each one has a little jiggle too". Camera movement is minimal, and even when it's there it doesn't expose anything new in the scene to the viewer. Additionally, every scene is mostly faked with very obvious 2D shapes, meaning that if the camera moved too much, the illusion would break. Lastly, the music was quite dull and not connected with what was on screen.

Fermi Paradox was also dioramas, but: each one had some engaging event happening, or at least well-directed camera movements that exposed new visuals to the viewer over time or showcased the full 3D nature of each scene. Additionally, each scene had at least one well-done graphics effect. The music was almost movie quality and in a few parts nicely synched up with the visuals.

How in the heck did they do that in 64kb

That is mind blowing.

Leaves quivering on a branch, realistic flames, realistic humans!

The water wasn't quite right but still.

Leaves quivering can be done with a simple sine driven by a value that is increased by a random delta, with a different offset for each leaf group (if you look at games which have "wind", this is how it is done - the system exposes some knobs for designers to tweak until things look good, but in general that is the idea).
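That recipe - a sine whose phase drifts by a random delta, with a fixed offset per leaf group - might look roughly like this toy sketch (the names and constants are mine):

```python
import math
import random

def leaf_sway(t, offset, jitter, speed=1.0, amplitude=0.1):
    """Wind sway for one leaf group at time t.

    The sine keeps the motion periodic; the slowly drifting
    jitter keeps groups from ever syncing up. amplitude and
    speed are the "knobs" designers would tweak.
    """
    return amplitude * math.sin(speed * t + offset + jitter)

rng = random.Random(42)
offsets = [0.0, 1.3, 2.6]          # one fixed offset per leaf group
jitters = [0.0, 0.0, 0.0]
for frame in range(60):            # two seconds at 30 fps
    t = frame / 30.0
    # Nudge each group's phase by a small random delta per frame.
    jitters = [j + rng.uniform(-0.02, 0.02) for j in jitters]
    sways = [leaf_sway(t, o, j) for o, j in zip(offsets, jitters)]
    assert all(abs(s) <= 0.1 for s in sways)   # bounded by amplitude
```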

The flames are very simple particles; the flame effect is probably the simplest particle effect you can do. The particle scene is blurred before the human figure is rendered, though, so the irregularities are not that visible.

The humans are simple black-and-white cutouts (they might even be stored as a single bit per pixel) with a small "light edge" on the side (most likely applied as an effect and not part of the data). It isn't very visible in the YouTube video because the compression smears the scene a bit, but if you run the demo locally you'll easily see it.

The intro isn't that high tech (compared to other intros of course), but Conspiracy are masters of presentation - the scene with the humans, leaves, mountains, house, etc wouldn't look as good as they do if it wasn't for the color grading, the camera motion and framing, the post processing and even the sequence the scenes are presented.

That was amazing! Many focus on a narrow presentation or style. That had all kinds of visual and mechanical elements to it that were done smoothly.

First one I saw and part of how I mentally judge the rest. Amazing work.

here's a great read about how it was made by one of the members: https://fgiesen.wordpress.com/2012/04/08/metaprogramming-for...

i've submitted it previously: https://news.ycombinator.com/item?id=7739599

If people wonder why I'm so obsessed with computers, this is the reason.

This was one of my favorite demos back in the day. I still have a copy of it (along with ~10 other favorites) sitting in a "Demoscene" folder somewhere. Many of them don't work on today's hardware/software, sadly (including this one).

It's great that it's open source now! That means if someone's really motivated, they can update it to run on modern environments (by no longer keeping it 4 KB), even OS X, etc.

And I had been planning to reverse engineer it for some time, but never got around to doing it.

Hats off to Inigo and others.

Am I the only one who's reminded of the Excel 97 easter egg?

Here's their write-up of how they made it as well. Super interesting read. It's shaders all the way down.


Ha, that's my exact comment from the related reddit thread. Kinda cool, I wonder how often that happens?


Spammers, presumably trying to build up karma on HN. Needless to say we've banned them and will continue to.

Watchful users can help out with this a lot. If you or anyone else notices comments being cross-posted in the future, please alert us at hn@ycombinator.com.

And that indeed is what we have to look forward to in online forums over the next 5+ years, as machine learning cuts its teeth on places like reddit, HN & disqus, and spammers try to find new irritating ways of getting eyeballs.

I'm not a fan of quoting XKCD, but... https://xkcd.com/810/

With how many XKCD responses there are, it seems a straightforward ML experiment to build an auto-relevant-xkcd.

Someone made something like that! https://www.reddit.com/user/relevantxkcd-bot

One of his better ones.

You do realize he copied your comment, don't you?

That's been happening a bit recently.


(I think there are others, but I can't find them now.)

Both usernames end in 2

Yeah, but I still wonder how often people copy comments around like that. It's not like that was some great writing or anything, but it was copied word for word.

Maybe it was some bot, but that's an odd bot.

I've seen it happen on YouTube (not to me); highly upvoted comments get copy/pasted a month or two later to "steal" upvotes for someone else.

It could be a bot tactic to build accounts with karma that can later be used for astroturfing, using proven comments from elsewhere. Maybe governments and organized astroturfers have automated that sort of thing more than we think.

Nail, meet hammer.

Hopefully in this case the nail is comment plagiarism and the hammer is more careful pattern matching in moderation systems.

Yep, happened to me in the Inferno thread yesterday.

Do you know if there is a video for these slides?

Complete with creative tensions:

> With cinematic look, like taken with a real camera. We had some dissagreements here

> • I absolutelly wanted to avoid the CG look - the danish part of the team wanted something sharp and shinny

> • I wanted a hand-held TV camera - they wanted a sts04 like smooth lovely cameras

> • I wanted a realistic scenery - they wanted more action in the scene..

And exploiting impressionism as a compression technique:

> The idea is NOT to render perfect snow, but to draw something that evoques snow, and let the viewer’s brain to trick the viewer.

His website is full of great resources: http://iquilezles.org/www/

Can someone TLDR how this stuff works?! I NEED TO DO IT!

That's unreal. On what kind of graphics hardware, though? Seems like it probably offloads most of the work on GPU whereas we'd have had to do most of it in software on HW weak enough that 4KB size actually mattered. And probably not achieve this demo.

>Seems like it probably offloads most of the work on GPU

It does just about everything on the GPU. All the CPU does is repeatedly render two triangles and play music: https://news.ycombinator.com/item?id=8070879

Here it has been adapted to WebGL: https://www.shadertoy.com/view/MdX3Rr

Edit: I'm wrong about the two triangles. From the .nfo-file:

  for those wondering, this a (too) low density flat mesh displaced with
  a procedural vertex shader. there arent any texturemaps for texturing,
  instead texturing (and shading) is defferred and computed procedurally
  in a full screen quad. this means there is zero overdraw for the quite
  expensive material at the cost of a single geometry pass. then another
  second full screen quad computes the motion blur. camera movements are 
  computed by a shader too and not in the cpu, as only the gpu knows the
  procedural definition of the landscape.

Thanks for the detailed response. I figured it mostly did GPU stuff. So, the real computing necessary here is a massively-parallel chip with generic and custom hardware and a bunch of memory, plus a regular core using 4KB on the other end. I think a more interesting challenge would be to force use of a subset of GPU functions or memory, plus tiny memory on the CPU side. I don't follow the demoscene closely enough to know if they subset GPUs like that. The idea being to make them run closer to the old Voodoo or pre-GeForce GPUs, to see just how much 2D or 3D performance one could squeeze out of it.

Such tricks could have long-term benefit, since any emerging FOSS GPU is more likely to resemble one of the older ones, given the complexity of the new ones. I'd clone one like the SGI Octanes they used to make movies on with mere 200MHz processors. Meanwhile, similar tricks might let one squeeze more out of the existing embedded GPUs in use. Maybe subset a PC GPU in demoscene compos like one of the smartphone GPUs. Yeah, that's got some interesting potential.

You seem to think that GPU programming is somehow easy. You should try it and see what you think.

Yes, there is massive amount of power available but it's not easy to use effectively. You need a different mental model how things work, there's very little shared state and all the algorithms used have to match the model of computation.

Using the GPU almost exclusively, generating everything procedurally is a massive accomplishment and much more difficult than "normal" CPU+GPU programming or using just the CPU.

I do not share your view that this would be somehow less impressive because it uses the GPU.

I used to do GPU programming. Brief foray into it for game programming plus a then-new field called "GPGPU" pushing its limits. Think I implemented some crypto or physics stuff on one. I've followed some of the recent efforts.

My points of comparison are what they're doing vs what it's designed to do with what vs what other people do with that and other hardware. It looks great with lots of efficiency. I'll give them that. It's just way less impressive to me given they're using a powerful graphics card to mostly do what it's designed to do plus their innovation.

Pre "GPGPU" era of mostly fixed function 3d accelerators is hardly comparable to modern programmable GPUs.

> It's just way less impressive to me given they're using a powerful graphics card to mostly do what it's designed to do ...

This demo isn't at all what the GPU is "designed to do". The all-procedural graphics is way different from drawing artist-generated 3d models from memory while being orchestrated by the CPU. While it is more commonplace today, this demo was pioneering work in "all GPU" procedural graphics.

"Pre "GPGPU" era of mostly fixed function 3d accelerators is hardly comparable to modern programmable GPUs."

Which people used to do things they weren't designed for at all, in so-called GPGPU work. The results rested on really clever work. It's why I brought it up.

"The all-procedural graphics is way different from drawing artist-generated 3d models from memory while being orchestrated by the CPU. While it is more commonplace today, this demo was pioneering work in "all GPU" procedural graphics."

This is where I likely slipped up. I forgot how old this one was. I retract that claim then.

From the point of view of the hackers who programmed Spacewar on the PDP-1, the C64 is special purpose hardware with a powerful graphics card designed to make it trivial to implement Space Invaders.

>plus a regular core using 4KB on other end.

The .exe is 4K (it has been compressed using Crinkler), not the application's RAM requirements. The game .kkrieger for example is a 96K .exe, but uses several hundred MB of RAM when run.

Also, the strict size requirements can interfere with execution speed. From the .nfo again:

   believe it or not, this was running at 30 fps in a gefoce 7900 at some
   point, but size optimizations forced us to ask you for a pretty decent
   graphics card, like a geforce 8800gtx or hd4850. please, make sure you
   have d3d9_33.dll somewhere there. also, you only need windows xp.

Oh yeah, I forgot about that. I wonder what this one's runtime in RAM is. Regarding GPU quote, that's exactly the sort of thing I'm talking about. It's sort of a cheat where a massive amount of resources are used in one place to reduce a tiny amount in another. An impressive optimization requires little to no extra resources in B when optimizing A. There's some types that straight-up can't seem to have that tradeoff. Yet, the more constrained demo scenes were forced to figure out a bunch of them that worked.

So, I think there's potential for GPU subsets or CPU/GPU tradeoffs to make for interesting opportunities for people to show off brilliance.

>Regarding GPU quote, that's exactly the sort of thing I'm talking about. It's sort of a cheat where a massive amount of resources are used in one place to reduce a tiny amount in another.

Since the demo was originally entered in the 4K competition at the Breakpoint 2009 demo party, it had to run on the computer designated to run the competition's entries. So it's not like it could require an arbitrarily powerful GPU.

Although the compo computer was pretty beefy for the time: https://breakpoint.untergrund.net/2009/compos_pc.php

Fair enough. The spec requirements I'm mentioning would apply to people setting competition requirements more than the authors. The authors should of course work within the constraints for any particular competition. They can still try my challenge on the side.

" Intel Core2Quad at 2.66GHz, 2GB of RAM, with a NVidia GeForce 295 GTX with 2x896MB of VRAM. "

Double CPU and more GPU than what I'm writing this on but half the RAM. Beefy indeed. .exe size is still impressive and all given what they're doing.

That's the thing, though.... If this were an arbitrarily sized demo, I (and probably most people) would agree with you about the GPU stuff. But it's not arbitrarily sized, it's all in 4K. And it's from 2009.

4KB + 300MB, per the other commenter. Let's not focus too much on one number without the other.

Programming is all about finding and exploiting ways to cheat.

I remember overhearing a conversation in the Sun cafeteria about how the Aviator flight simulator only had one-half of a 3d model of the airplane, and it just reflected it to get the other half. They complained that was cheating, but that's just how it is!

Oh sure. It's one approach. We have to rate the cheats or honest work somehow. I think one way is to look at both what's produced with what type and number of resources are utilized. The constraints each provide plus what's achieved with them vs some baseline might be a good example. Baseline maybe determined after first round of submissions.

Btw, I'd probably have left off Space Invaders for the exact reason you mentioned. Curious to know what you find to be the most impressive demo on that system, though.

The 4kb restriction isn't there to make it run on weak hw, it's there to push people.

However it does irritate me that it's pregenerating the entire scene in memory. Being allowed to use 300MB of RAM doesn't strike me as very limiting.

You wanted optimized code size and optimized performance?

I mean, sure, but think about how big 4KB is, the tricks that are being used to create the scenes are crazy hacks using default Windows sound files and literally anything the executable can reference on the cheap.

Procedural content generation is really expensive (in general), but that's the beauty of it. You find a way to abstract the content into an algorithm, and then you can reduce the size of the assets, but you pretty much always need to pay the price somewhere.

But hey, I understand the sentiment, I wish Slack didn't consume 2 GB of RAM on my machine.

"Being allowed to use 300MB of RAM doesn't strike me as very limiting."

BOOM! I knew it was going to be huge. That's a beefy GPU + 300MB in RAM + pregenerating. I'd have... made sacrifices to have that even in the Half-Life 1 days. :)

I figured that. It's just that almost everything's done on the GPU for a rendering demo. That's really pushing people. ;)

The wink face makes it seem like you think this is easy because using a GPU to execute the program is allowed. No?

Edit: just read your other comment about real challenges in the C64 subset of the demoscene. That's like "You set a record in a 1600m race? For a real challenge, set a record in a marathon." It's just arbitrarily moving the totally legitimate goalposts to a different challenge because you prefer it.

How much harder would it be if it were software rendered, or running on a simple GPU from the late 90's? Would their executable still be 4KB?

>How much harder would it be if on a simple GPU from the late 90's

It would be impossible since pixel shaders didn't exist until the 2000's ;-)

As for software rendering: Since a pixel shader is essentially a program executed for every pixel, it's trivially portable to the CPU: Just turn it into a function and call it for every pixel on the screen. Making it fast is another matter altogether though.

Nitpicking: in the offline rendering world RenderMan had shaders ca. 1990[1], and graphics hackers got around to compiling those for research GPUs in the 90s too[2]. (Hardware had programmability equivalent to current shaders early as well [3], but no compilers for fancy shading languages)

[1] https://people.csail.mit.edu/ericchan/bib/pdf/p289-hanrahan.... [2] http://www.cs.unc.edu/~olano/papers/pxflshading.pdf [3] http://www.cs.unc.edu/~pxfl/papers/high-speed_rendering.pdf

That's some neat stuff. Especially PixelFlow. It had some clever, architectural decisions in terms of memory and computing primitives. Such schemes are already re-appearing in deep learning chips with old work like this maybe having some ideas waiting to be re-discovered.

"It would be impossible since pixel shaders didn't exist until the 2000's ;-)"

Lmao. You got me there.

"Just turn it into a function and call it for every pixel on the screen. Making it fast is another matter altogether though."

I was imagining it took up many MB of memory and massive cycles even on a multicore CPU. Suddenly, one faces tough decisions about organization, resolution, primitives, techniques used, algorithms, and so on. Gets really, really hard to make tiny and fast stuff without that GPU doing heavy lifting. :)

A soft renderer wouldn't fit in 4096 bytes either. The overwhelming preference of the demoscene when doing PC filesize compos is to lean on OS provisions in order to free up space for more algorithms. Hence you have demos that use files in C:\Windows as source data. Likewise, you have demos for older computers that require aftermarket RAM upgrades and employ preprocessing techniques that require modern computing resources. In unrestricted compos modern game engines get employed these days, too, and while many of those entries suffer the downside of having a low entry bar, good work has been made too.

Pointing at the GPU as a particular cheat or a make-easy button is not relevant to the conversation, in this light. Having a Gravis Ultrasound was also a cheat back in the day ;) It's all fairly arbitrary stuff, and in the end, the point is to present something cool running on the hardware and within the nominal restrictions, even if you get tricky to do so.

"It's all fairly arbitrary stuff, and in the end, the point is to present something cool running on the hardware and within the nominal restrictions, even if you get tricky to do so."

Another good, detailed perspective on it. Appreciate it. I'll especially agree with the part I quoted. :)

Response to edit: more like they couldn't pull it off, so they asked people to buy a better graphics card. That's in their own race. Then I pointed out that doing graphics operations, mostly rendering, almost entirely on a graphics card designed for that was barely an accomplishment vs stuff like the C64 demoscene. .kkrieger had me way more impressed due to all the elements involved vs its size. So, I suggested subsetting or constraining the graphics card so its hundreds of millions of transistors don't just hand people their victories. Plus allow more creativity.

The C64 subset of the demoscene is still going if you want it.

That's a real challenge. :) My comment to Kristine has some other details on how we might do something between that and a full GPU.

If we go that way, a real challenge would be designing your own computer then making a demo to run on it.

Just because the tools are more capable doesn't mean the challenge is any less real or the result less impressive.

http://www.linusakesson.net/scene/parallelogram/index.php Here is a demo from someone who has actually done just that. He's done a lot of other cool stuff too.

I have nothing but props for that one. A true, full-stack or whole-system coder in the way the term should be used. He's also about halfway to Frenchgeek's grand challenge. Maybe we need to take up funding for him to put it all on an ASIC at 0.35 micron.

I'm actually working toward that challenge. Well, that plus synthesis, verification, and analog tech to create it. Glad we agree on high end for a demo challenge. :P

This is actually 4KB (kilobytes). The title led me to believe it was 4Kb (kilobits). Still impressive though.

Back in the day demos were more impressive imo. A lot of them now use DirectX or OpenGL. For the most part, the stuff you see isn't written by hand anymore AFAIK; they just have programs to generate the actual demo. Basically they use modeling programs.

There are all kinds of demos: some of them use models, some of them do not, some are technical feats, and some are artforms. Many are both.

Models are just serialized polygon meshes. We've been using models for demos for way longer than DX/OGL have existed. They're just another tool in the box which you can use (if you want to).

Using DirectX or OpenGL nowadays is like using the CPU: it's just part of the stack. They are probably lower level than you think: using DX/OGL isn't just doing something like drawModel(model, x, y, z), it's way lower level than that.

This demo in particular is not very different from old school demos. Back in the day we had interrupts, now we have API calls, but in the end shaders are just code. Elevated uses D3D to execute them on the GPU, but that's all. And the synthesizer is apparently coded in ASM.

Even if you really miss the old school platforms there are still demos produced for them, often pushing the limits of what can be done.

Check http://www.pouet.net/ for lots of impressive demos.

Ah, so yes, I appreciate demos which push hardware to the limit or create some new effect which hasn't been seen before. I feel most demos on older hardware fall into this category more than newer demos do.

Additionally, please correct me if I'm wrong, but older demos didn't have nice graphics APIs to call. They had to create, and store in their binary, what's given for free by APIs these days. I think the Amiga did have some 3D stuff?

With OpenGL, a handful of lines gets you a spinning cube with lighting. So much more work had to be put into older demos to get to the same point. On top of that, the demo writers really had to know the hardware well and dive into undocumented behavior. A lot more was being calculated on the CPU back then, too.

I haven't been on pouet.net in a while but I will look at newer demos.

While true for large demos, the 4kb (and perhaps even 64kb?) intros are still very much written by hand.

I hope so. I'm not involved with the scene at all, but I did read a bunch of history about the scene and watched tons of demos.

I think farbrausch was one of the major groups who did this sort of thing https://github.com/farbrausch/fr_public

Lots of groups use demotools, including conspiracy for their 64k prods. Check out the moleman documentaries for lots of interesting scene info and lore! http://vakondok.com/index_en.html

Cool idea, too bad it's not free software. It's less free than any software I've seen (explicitly saying that you can't use it for "settings where security is critical" -- something that doesn't even make sense from a software license perspective). It's like the "Good not Evil" line in the JSON license.

Check out how many of your commercial software licenses forbid running them in nuclear power plants.

I use free software, so none. :D

Alternately, all of them :P
