I love the demo scene, but the culture is kind of strange. Like I've been fooling around with the C64, and all the tools (emulators, debuggers, etc.) are hosted on SourceForge, or sometimes some random FTP server. The author of the debugger I'm using proudly proclaims that he only works on it when he's fucked up on drugs after parties. The documentation of some tools has lewd ASCII art, and so on.
I appreciate the anti-corporate vibe and at the same time wish they were a little more professional. Whatever you think, they're true 80s style cyberpunk hackers!
Well, of course. A “professional” does not (and cannot) behave professionally 100% of the time; the idea is that you act more professionally when doing work that involves others.
To be honest, I've never had that requirement for any community I take part in. If I don't like it, I look elsewhere.
What I consider unprofessional, for example, is banning people from your service because a mob demands it. PayPal is a financial-transactions service that has banned users who fell out of public favor. That's exactly where professional distance would make sense.
Imagine IT services handing out the personal information of people they don't like; it should be unthinkable. Given how these companies have behaved, I think this is a real danger for large social media sites.
That's down to you having a different set of bad experiences than others. There's room for both of those things to be wrong; I think over time you develop a broader and hopefully more nuanced sense of what kind of behavior is unprofessional and harmful to the social order of a project.
"professional" is entirely about public image. If it's behind closed doors of course they no longer act professional. In some languages that value professionality there's even words built into the language. In Japanese there's the word "tatemae" 建て前 that translates roughly to "public position or attitude (as opposed to private thoughts)" that people hold while at work and they switch it off when leaving the work place/in private settings.
Being professional is entirely about being a good liar.
I recently explained to someone in an interview that I chose Puppet over Ansible circa 2014 because by default, Ansible used to output success or failure messages using cowsay.
As far as I am aware, the only known bypass of a particular iOS security mitigation has a note in it from the author notifying readers that they’ll only understand it after they “hit a blunt”. Said author also enjoys referencing Kim Jong Un in their work. It’s not just the demoscene that’s like this.
Probably developers sick of all programming being so sterile and corporate. You can't even publish a FOSS project without people jumping at you if it's not suitable for a corporate environment.
Can you articulate what the specific benefits would be if the person providing obscure free tools via esoteric channels suddenly conformed to the behavioral norms you're wishing for?
TIL about the Demoscene[1]: "an international computer art subculture focused on producing demos: self-contained, sometimes extremely small, computer programs that produce audio-visual presentations. The purpose of a demo is to show off programming, visual art, and musical skills. Demos and other demoscene productions are shared at festivals known as demoparties, voted on by those who attend, and released online."
I'm surprised someone on HN since 2012 would learn about the demoscene today! Then again, it's true that the demoscene "keeps to itself", it's always been quite the bubble. I'm not sure how that community could do some outreach, but I believe a lot more people would find it interesting.
I wish HN had a top nav link and a mechanism like it has for “Show HN” posts, but for demoscene stuff. The prefix one would use could be “Demoscene HN” or something, and the rule would be the same as for the “Show HN” prefix – that using it is “allowed” only when the submitter of the link was involved in making the thing :D
There is also an active demoscene for microcomputers of the 80s, such as the Commodore 64 and the ZX Spectrum. And also people coding new games for them.
I learned programming on the TRS80, and didn’t know about the demo scene until I was already too busy with other parts of my life; I think I first saw a demo on the Amiga.
I always feel like I missed out on something transcendental. I think I would have learned a huge amount. Instead I had to settle for hacking NewDOS/80 binaries.
I have a huge amount of technical respect for the people who do this stuff.
I didn't know it existed until ~October(?) last year. My $lastEmployer (animation studio) had a demo day where in-house people would demo some personal projects they were working on, as well as exhibit entries from the actual demoscene. It was insane; I would never dream of being able to produce most of the things I saw. If I remember correctly we watched demos from the 4K category up to the 1 or 4M category. Procedural graphics, audio, emulation, it truly is an art form.
There were only two types of programming that intimidated me so much I wouldn't even try: people writing languages from scratch and the demo scene people. I just sat back and said "nope, I'll never get there."
It's never too late! Demoscene competitions happen all year round, around the globe. Some of the competitions are super focused and aren't a ton of work to do. The competitions (called "parties" in the scene) are a great way to spend a weekend with a bunch of like-minded nerds from many different disciplines: programming, graphics, music, ANSI/ASCII art, retrocomputing, etc.
That intro was very cool! Does anyone have pointers for a fullstack web dev who is interested in the demoscene (or computer generated art in general)?
I've actually played around a bit with Rust [1] (I understand it probably isn't ideal for the demoscene), but I haven't really done any graphics programming before. I get the feeling it is a pretty deep rabbit hole.
In this intro, as in almost every modern 4kB intro, the graphics are made in a shader. To play with shaders, I'd recommend looking at https://www.shadertoy.com/. Also check Inigo Quilez's website (https://www.iquilezles.org/); it has a lot of fascinating content.
If you're a web dev, then <canvas> may be the obvious option to get started! There's a fairly large creative coding community.
The Coding Train on YouTube has a ton of content [0]. If you really want to start from the beginning then I would also recommend Coding Math on YouTube [1], which is sort of graphics from first principles using JS/canvas. That even starts with teaching trigonometry, vectors, etc., so if your math is rusty it's a good place to start.
> but I haven't really done any graphics programming before
I've never written any demos but do enjoy playing with GPUs. Regarding anything and everything graphics related, try not to get overwhelmed by modern APIs; it's very easy to lose the forest for the trees. Shadertoy is a really good starting point.
My impression of the demoscene can be loosely summarized as extreme code golf for graphics and sound.
Kinda surprised about iterators not getting optimized so well. Even in higher-level languages, this is a commonly utilized optimization opportunity - e.g. the C# compiler will replace a foreach-loop over an array with an identical (but faster) for-loop.
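For what it's worth, rustc (via LLVM) can usually do the same. Here's a hypothetical sketch of two functions that, in my experience, tend to compile to the same bounds-check-free loop, since the slice length is known up front:

fn sum_iter(xs: &[f32]) -> f32 {
    xs.iter().sum()
}

fn sum_index(xs: &[f32]) -> f32 {
    let mut s = 0.0;
    for i in 0..xs.len() {
        // LLVM can prove i < xs.len(), so the bounds check is removed.
        s += xs[i];
    }
    s
}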
There are tricks/an art to Rust's iterator semantics.
One example I know of is that when calculating a dot product, you can improve the generated code and elide the bounds checks only by using a very particular syntax:
// Must slice to equal lengths first; then the bounds checks are eliminated!
use std::cmp;

fn dot(xs: &[f32], ys: &[f32]) -> f32 {
    let len = cmp::min(xs.len(), ys.len());
    let xs = &xs[..len];
    let ys = &ys[..len];
    let mut s = 0.;
    for i in 0..xs.len() {
        s += xs[i] * ys[i];
    }
    s
}
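For comparison, a sketch of the idiomatic iterator version; as far as I know it also avoids the bounds checks, because zip stops at the shorter of the two slices and no indexing is involved at all:

fn dot_iter(xs: &[f32], ys: &[f32]) -> f32 {
    // zip pairs elements until the shorter slice runs out, so there is
    // no indexing and hence no bounds checking to elide.
    xs.iter().zip(ys).map(|(x, y)| x * y).sum()
}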
Compiler optimizers are full of surprises going both ways: good and bad. And details like this can change from version to version and be affected by compiler options. I would also expect performance to be a higher priority than code size, generally, so missed opportunities like this might go unnoticed and unaddressed for a long time if they don't show up on performance benchmarks.
The classic example of surprising optimization is perhaps matrix multiplication in Fortran:
At some point in history, when you wrote the straightforward, primitive three-nested-loops version of matrix multiplication your code ran much faster than when you tried to implement one of the more sophisticated algorithms.
That was because the Fortran compiler recognised the primitive version and replaced it with a special purpose highly optimized matrix multiplication. The compiler couldn't do that with the more sophisticated approaches.
> The classic example of surprising optimization is perhaps matrix multiplication in Fortran:
> At some point in history, when you wrote the straightforward, primitive three-nested-loops version of matrix multiplication your code ran much faster than when you tried to implement one of the more sophisticated algorithms.
It might also have been due to Fortran using column-major matrix layout; if you try to implement a sophisticated multiplication algorithm that is optimised for row-major matrices, then that will be slower (indeed, also the row-major three-nested-loops version is already quite a bit slower). Another thing is that the sophisticated algorithms usually have some constant overhead that makes them shine only on sufficiently large matrices.
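To illustrate the layout point, a rough sketch (Rust, row-major): the i-k-j loop order walks both b and c along rows, which is cache-friendly here, while the same order applied to Fortran's column-major arrays would stride badly.

// Naive n x n matrix multiply over row-major slices.
// The i-k-j order reads b and writes c sequentially in memory.
fn matmul(a: &[f32], b: &[f32], c: &mut [f32], n: usize) {
    for i in 0..n {
        for k in 0..n {
            let aik = a[i * n + k];
            for j in 0..n {
                c[i * n + j] += aik * b[k * n + j];
            }
        }
    }
}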
This isn't a case of a missed optimization. The alternative loop they've used has different semantics: it assumes zero iterations never happen. It's the equivalent of changing `while(){}` to `do {} while()`. It's not something an optimizer could safely do when the number of iterations isn't known.
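In Rust terms (Rust has no do-while, so loop + break stands in), a sketch of the two shapes and where they diverge:

fn count_checked(n: u32) {
    // while-style: the condition is tested before the first iteration,
    // so with n == 0 the body never runs.
    let mut i = 0;
    while i < n {
        println!("{}", i);
        i += 1;
    }
}

fn count_rotated(n: u32) {
    // do-while-style: the body runs before the test, so with n == 0 it
    // still runs once. Rotating the loop this way is only a safe
    // optimization when the compiler can prove the count is nonzero.
    let mut i = 0;
    loop {
        println!("{}", i);
        i += 1;
        if i >= n { break; }
    }
}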
Using the OS is allowed. As far as I know, one of the reasons there are more Windows demos than, say, Linux ones is that Windows provides more graphics/sound APIs out of the box, while on Linux the OS itself does less, so you'd have to pull in libraries (which is kind of cheating when doing a 4k demo).
I think it could be summarized as "write a demo for the target platform as efficiently as possible". Doesn't necessarily need to be bare metal, there are JS demos that run in the browser as well. In that case the browser (with all its APIs) is the target platform.
The only way to achieve that kind of graphics in 4k is leveraging the Windows platform API. That is, unless you can drive a GPU and a sound card from bare metal in 4k, while implementing the drivers and everything else for the hardware.
Most 4K demos on Pouet in recent years have been Windows-only. Demoscene parties usually specify in advance exactly what computer the demos will run on, so groups can optimize for that setup. The code sometimes only runs on that particular setup.
I really enjoyed the artistic part. I wonder if the author was trying to evoke a Martian atmosphere with the color choices. The contrast of the spheres with the columns was very nice as well...the perfect, beautiful spheres ascending into the sky...
I'm not sure if the author is reading this thread, but thanks! :-)
The only unsafe functionality the author describes is the unchecked array access. Based on that (which may be incomplete, of course), the borrow checker is a safety feature that remains active.
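I haven't read the source, but the usual form of that in Rust is slice::get_unchecked; a hypothetical sketch:

// The bounds check is skipped, so the caller must guarantee the index
// is in range; the borrow checker and the rest of the safety machinery
// still apply as normal.
fn sum_first(xs: &[f32], n: usize) -> f32 {
    debug_assert!(n <= xs.len());
    let mut s = 0.0;
    for i in 0..n {
        // SAFETY: caller guarantees n <= xs.len().
        s += unsafe { *xs.get_unchecked(i) };
    }
    s
}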
GLSL shaders are usually either loaded from files or stored as strings in your programming language of choice. The OpenGL library or bindings will usually have some facility to compile those programs and then send them to the GPU, along with textures, geometry, etc.
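In Rust that might look roughly like this (a sketch using the raw gl crate bindings; it assumes a current OpenGL context created elsewhere, and omits the compile-status check a real program would do via gl::GetShaderiv):

use std::ffi::CString;
use std::ptr;

// Compile a fragment shader from a source string and hand it to the
// driver; the returned id would then be linked into a program object.
unsafe fn compile_fragment_shader(source: &str) -> gl::types::GLuint {
    let shader = gl::CreateShader(gl::FRAGMENT_SHADER);
    let c_src = CString::new(source).unwrap();
    let src_ptr = c_src.as_ptr();
    gl::ShaderSource(shader, 1, &src_ptr, ptr::null());
    gl::CompileShader(shader);
    shader
}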
I think the author called _mm_load_ps() on a pointer that isn’t 16-byte aligned. Either change that to _mm_loadu_ps() (which also works on unaligned pointers but incurs a performance penalty), or make sure the object is aligned when doing the heap allocation (C++ has std::aligned_alloc(); there's probably something similar in Rust).
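In Rust the aligned-allocation route would look roughly like this sketch using std::alloc (the author's actual fix may differ; the 64-float buffer is made up for illustration):

use std::alloc::{alloc, dealloc, Layout};

fn main() {
    // 64 f32s, aligned to the 16 bytes that _mm_load_ps requires.
    let layout = Layout::from_size_align(64 * 4, 16).unwrap();
    unsafe {
        let ptr = alloc(layout) as *mut f32;
        assert!(!ptr.is_null(), "allocation failed");
        assert_eq!(ptr as usize % 16, 0); // alignment holds
        // ... aligned SSE loads on ptr are now fine alignment-wise ...
        dealloc(ptr as *mut u8, layout);
    }
}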
Debug and release builds probably call slightly different sets of methods, so they might end up with slightly different heaps. That might be enough to cause a difference in offset between the builds.
Basically it has to run on a fairly clean Windows machine. The competition will usually publish the specifications of the machine ahead of time and what's on it. What you enter has to fit entirely within the bounds of the competition, but can use anything else on the machine if it's available. So you could enter, say, a 4kb entry, and use a bunch of Windows .dlls for icons or something as artwork, but you couldn't enter a 4kb executable and 100MB of libraries and other stuff.
For some languages where this is really hard to achieve, they'll carve out a separate category for a language if they think there will be enough entries: e.g. a Java category where the entry fits within the size limit but can use any other library in the default Java classpath, even though that's very, very big. Likewise for different operating systems (Linux, macOS competitions).
There's been a fantastic resurgence in retro hardware, and a category called "old school/skool" for entries that have to run in MS-DOS, which can be ultra hard as you have to code up a lot more of the runtime yourself: music playback, graphics rendering pipelines, etc.
It's only the output binary size that matters. If the libraries are in there, then they count. Where it gets a bit funny is graphics libraries and GPU features: you wouldn't include Vulkan in your size, but you get a bunch of features that weren't available on older hardware.