What exactly makes a demo impressive depends on the category, e.g. targeting a limited hardware platform like some 80s computer, or working within a strict size limit. A popular category is "4k", meaning that the program has to fit into 4096 bytes. 4k demos can actually get quite visually complex and even feature music.
Many modern games that let you explore gigantic, seemingly infinite spaces (e.g. Minecraft, No Man's Sky) owe a lot to the ideas that originated in the Demoscene.
What I do not really understand: Are all these functions (for example sdfCone()) generic modelling terminology or are they specific to a certain tool?
Then all you need is some function that does the raymarching, plus a bunch of other tricks for shading. These techniques are standard computer graphics material, so the terminology is largely shared rather than tied to any particular tool.
For some SDFs you could start with IQ's page: https://www.iquilezles.org/www/articles/distfunctions/distfu...
    float coneSDF( vec3 p, vec2 c, float h )
    {
        // c is the sin/cos of the cone's angle, h is its height
        float q = length(p.xz);
        return max( dot(c.xy, vec2(q, p.y)), -h - p.y );
    }
> Quilez describes the method he used to generate the 15 types of foliage, rocks, and even small flies as “painting with code.” The code exists as a Pixar RenderMan plug-in written in C++; that is, a DSO. When Quilez started, he rendered with PRMan 15, then, as time passed, changed to Version 16.
> “It was quite fun using a compiler to produce assets,” Quilez says. “Going from the flat world into a super-dense 3D world was all my work. Moss with small clover leaves around it, bracken, hummocks, hanging moss, all the leaves and pine needles, lichen, grass and flowers, heather, birch trees, gorse, Scotch broom, the distant trees and rocks, the small dots that were flies, all were specific pieces of code; all the shapes, the colors, everything is in the code. We didn’t write a tool that an artist would use; there’s no user interface. Usually we use code to glue things together. In this case, we thought of code as assets.”
> At first, Quilez planned to hard-code only moss and grass, but the result was so successful he ended up writing specific code for many more types of vegetation. “I abstracted the code and found the parts they all had in common, but in principle, each is different,” he says. “They share the logic, of course.”
> He treated the vegetation that grew on the rocks, trees, and up from the ground differently from that without supporting geometry. For the former: “We’d start with 3D models and go polygon by polygon in the mesh,” Quilez explains. “For every quad, we would generate random points, and from those points we would grow flowers, leaves, and something else.”
> For the latter: “When we didn’t have a mesh,” Quilez explains, “we’d place cubes where we wanted things to grow. These weren’t polygons; they were mathematical descriptions: This is the center, these are the sides, end of story. The code would use that to generate detail inside. We generated bushes out of nowhere.”
> Quilez didn’t use typical plant-growing rules to produce the grass, moss, and other vegetation. “The problem with L-systems and other old-school techniques is that you have to encode the rules,” he says. “If you want to change something, you have to change the rules, which isn’t intuitive. When we wanted to change something, we’d go into the code and make the change.”
His ShaderToy work is quite something as well, for example: https://www.shadertoy.com/view/4ttSWf