Specifically, CC-BY-SA. Not all Creative Commons licenses are actually useful: the no-derivatives ones shouldn't be bundled under the same label, and the non-commercial clauses just cause problems because it isn't clear what counts as commercial use.
> In 1999, just days before the conclusion of a contract to sell his invention, Sloot died suddenly of a heart attack. The source code was never recovered, and the technique and claim have never been reproduced or verified.
Physics is not random. If a time traveller is here now, then it was always here, because to be here at time T it must have been here at time T-1 too. Even if we develop a time machine that changes the universe at time T, the change will propagate in both the T+1 and T-1 directions. Time travelers are our brothers and sisters.
> If a time traveller is here now, then it was always here, because to be here at time T it must have been here at time T-1 too
If you believe in a non-random, deterministic universe, then at T-1 (and at every moment since the Big Bang) the traveler was already here in the shape of their ancestors, just like at some point you existed in your mom and dad, and before that in whatever Big Bang elements, etc.
Who’s to say we’re living in the corrected branch? Commit fae12 doesn’t benefit from a patch being applied to its great-grandparent and the history since then being rebased onto that.
In my most Hotep-y daydreams, I've sometimes wondered if this is the timeline where a group of scrappy, persecuted white dudes traveled back in time to make sure that Western civilization would become the primary world power (instead of, like, an expansionist ancient China or Egypt or Persia).
Yeah, it's exactly what you would expect someone like Card to write, but if you go into it knowing the author's views, it almost becomes a parody of his politics and quite an enjoyable read in the spirit of "I wonder how many crazy reasons he's going to bring up for why Columbus was actually a hero".
My quick and dirty interpretation after skimming that article: he misrepresented a hashing algo to non-technical people who didn't understand that a) it's a one-way function and b) even if it weren't, multiple inputs can still map to the same hash.
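To see point (b) concretely, here's a tiny throwaway illustration of my own (not from the article): once the digest is smaller than the input space, collisions are guaranteed by the pigeonhole principle; truncating SHA-256 to two bytes just makes one easy to find by brute force.

```python
# Tiny demo of point (b): a digest smaller than the input space must collide.
# Truncating SHA-256 to 2 bytes makes a collision easy to find by brute force.
import hashlib

seen = {}
for i in range(100_000):
    msg = f"message-{i}".encode()
    digest = hashlib.sha256(msg).digest()[:2]   # deliberately tiny "hash"
    if digest in seen:
        print("collision:", seen[digest], "and", msg, "->", digest.hex())
        break
    seen[digest] = msg
```

With a 16-bit digest a collision typically shows up within a few hundred messages (birthday bound), which is the same reason a hash can never serve as a reversible "compressed" copy of its input.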
> In his prototype, he faked his invention, which is why he refused to let anyone near it, and answered only in mystical vagueness to questions.
I was a happy attendee at a demo given to a wealthy friend who was asked to invest (alongside Pieper). I'll let Jan take the secret to his grave, but the writer of TFA is spot on: he faked it, though he really did believe that he could make it work. It's a very sad story.
I met him a few times, and after his death I was contacted by a 'friend' of his (you never know; I just know for sure that he lived around the block from him, as did I at the time, in that miserable town) who wanted to hire me to figure out the secret. They all thought it was real, but they missed the background needed to reason about it correctly, like Kolmogorov complexity.

I don't think he really saw it as faking; he just thought he needed some more time to make it generally applicable. The idea was basically re-applied compression. You had four files: the original video, the compression exe, the decompression exe, and the decompression data file. The compression would apply a fairly basic algorithm, more or less of the type 'replace a pattern of x bytes with 1 byte'. That mapping was written to the decompression data file, and the process was repeated until the compressed video was very small; however, the decompression data file would then be very large (similar, obviously, to the combined size of the videos).

His secret computer had storage holding the decompression data file, and the idea was that he would, in time, find the ideal decompression data file (the Golden Mapping or some such) that would be small-ish and yet able to compress thousands of videos very efficiently. Which indeed would be enough, but it's not possible, of course. To be clear: they believed they could ultimately have a single data file of a few MB, with videos of 64 kB, by re-applying the encoding and hitting some magic bag of mappings that would always be found repeating in very large files, making the compressed file smaller and smaller and smaller.
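For the curious, here is a rough sketch of my own reconstruction of that scheme (not Sloot's actual code): repeatedly replace the most frequent pair of symbols with a new symbol and record the mapping. On dense data, like video that has already been compressed once, almost nothing repeats, so the stream barely shrinks and every replacement costs a mapping entry of its own.

```python
# Toy reconstruction of the "replace patterns, stash the mapping" idea.
# On high-entropy input the stream barely shrinks and the table grows instead.
import collections
import os

def pair_compress(data, max_rounds=200):
    seq = list(data)              # symbols start out as byte values 0..255
    table = []                    # list of (new_symbol, (a, b)) mappings
    next_sym = 256
    for _ in range(max_rounds):
        counts = collections.Counter(zip(seq, seq[1:]))
        if not counts:
            break
        (a, b), n = counts.most_common(1)[0]
        if n < 2:                 # nothing worth replacing anymore
            break
        table.append((next_sym, (a, b)))
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
                out.append(next_sym)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
        next_sym += 1
    return seq, table

data = os.urandom(16 * 1024)      # stand-in for "already compressed" video
seq, table = pair_compress(data)
print("symbols:", len(data), "->", len(seq), "| mapping entries:", len(table))
# The symbol count only drops by about what the mapping table gains; there is
# no shared "golden" table that makes arbitrary videos collapse to a few kB.
```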
I don't know how far he really got with this, and nobody knows or ever will know. I would wager that IF they (the investors, people the investors hired, etc.) found that floppy disk, they would make it disappear due to the enormous embarrassment if it leaked out.
I am no specialist -- and although I understand and agree with the impossibility being referred to -- somehow it seems to me that AI models are "kind of" getting "closer" to this "golden decompression data file". Although AI models are not that huge, from a tiny human input (the "compressed data") they manage to "decompress" to data of mind-blowing quality, highly detailed and in amazing variety, while staying extremely coherent. These results are "inexact", sure (being exact is the aforementioned impossibility), but to the human perception they seem "perfect", which is good enough (for movies and other arts).
Yes, but the Sloot method was supposed to be lossless. When we talk lossy, it gets trickier, because then you have to define the expectation: what loss percentage and error rate are acceptable. I'm sure we'll have AIs that can produce something Terminator-ish in a bit; the thing is, it will be like you remembering the movie: similar in the bigger plot, but with a lot of the details completely off/wrong. That's not the type of compression/encoding Mr. Sloot was talking about.
Edit: 'encryption' was supposed to read 'encoding/compression'.
By your definition, the script and a list of actors should count as compression, but that's clearly not what this particular invention claimed to do. An AI model is more like a paint-by-numbers game than a compression method. It creates something that looks superficially like the original but isn't the original.
Any "compression" mechanism that apparently violates Shanon's theorems would be "lossy" anyway, and lossy compression is essentially creating something that looks superficially like the original but isn't.
A script and a list of actors would take up 8k already (if not more), so yeah, an AI that can work on the prompt "take this script and make it like a Hollywood blockbuster" might be our best way to attempt to recreate this "compression" system with SoTA tech.
Sloot claimed his method was lossless, and it supposedly started out from a digital representation (without compression artifacts such as introduced by DCT or FT).
You are extremely close to having it all figured out. My then-friend Hugo Krop [1] realized that something didn't add up but also missed the required background, which is where I was brought in. I figured out how the demo was rigged and told him; that was the end of that. Interestingly, Pieper did go for it, and Pieper wasn't exactly dumb himself. I never really got that bit; he must have realized it was a scam. The demo was held in a building on the Sarphatikade in Amsterdam.
[1] Of 'TextLite' fame, deceased, very colorful, and later on a scammer in his own right.
Yeah, that demo was somewhat legendary back then. But how did you figure out how he did the demo? Although I met Sloot, he never demo'd it to me and I never saw a live demo (not on video either; why are there no videos? Pieper took the machine somewhere once). His friend said Sloot did demo it to him, and over time also told him, more or less, how it worked. I remember him saying all the time that Sloot (and now this guy) talked about infinite compression like it was the most trivial thing in the world, so I don't suppose they actually thought that part was any secret.
What I find very strange about the Pieper part (I also met him, through a company (client) he advised via that investor vehicle he had, of which I don't think any companies made it) is not that he fell for it. Unlike what others say, he didn't appear very clever to me, at least not in anything tech; maybe in business, although... He seemed like a blowhard ('blaaskaak') when I met him: arrogant as hell and not much substance, but maybe that was his spiel for the CEO of the company he invested in. Anyway, what I find strange is not that he fell for it, but that his Philips tech colleagues, who saw the 'invention' multiple times, didn't get the same feeling as you and Krop, and didn't warn him with 'you must be insane to believe this, boss' or something. It's not like Dutch people would hold back, even if he was the boss.
Jan left a wife and four kids behind and I think Jan was effectively not the engine behind the scam, so I'm not going to put any of the rest of the story online. But if you want we can take this offline, email in profile.
As for Pieper: there is a reason why his investment vehicle (I take it you are referring to Twinning but there were others as well) did not do well.
Gah, you really sent me down memory lane there; I've been thinking all day about all those people and what happened to them. Quite a few of them have died, some did really well, some went to jail, some have evaporated into thin air. It's a kaleidoscope.
I've been trying to place the exact date of the demo and I suspect it was one of the first he ever did to 'outsiders'.
The little story about the alien at the beginning was interesting. It might be a way of rephrasing information theory as a limit on “measure-ability”.
I remember realizing in high school physics that a perfectly rigid rod would not be possible, because it would allow faster-than-light communication. There's probably an existing result to that effect in information theory that I don't know about, showing how you can't store more information than is allowed, by converting the problem into one of measurement rather than compression.
It might even say something about how small matter is allowed to be.
If you increase the number of sticks the alien is allowed to have, then his task becomes significantly easier. So the question could be rephrased as “what are the fewest sticks the alien could use to complete his task of encoding n bits representing m books”.
Fewer sticks than that would violate this law of measurement (I don’t know if that actually exists but it seems like it) and more than that is wasteful.
At any rate, each additional bit of information requires measuring with twice the precision, so the precision needed grows exponentially with the amount of data, and it's clearly impossible.
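To make that concrete, here is a small illustration of my own (hypothetical numbers, not from the article): storing n bits as the position of a single mark on a stick means telling apart 2^n positions, i.e. measuring to within 2^-n of the stick's length.

```python
# My own illustration: precision needed to read n bits off one mark
# on a 1-meter stick.
import math

def precision_exponent(n_bits, stick_length_m=1.0):
    # log10 of the required measurement precision, in meters
    return math.log10(stick_length_m) - n_bits * math.log10(2)

for n in (10, 64, 8 * 1024 * 8):   # 10 bits, 64 bits, an 8 kB file
    print(f"{n} bits -> need ~1e{precision_exponent(n):.0f} m precision")
# The Planck length is ~1.6e-35 m, so even a few hundred bits is hopeless.
```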
You're on the right track, especially with the last sentence.
If reality had an infinite amount of detail (i.e. matter could be arbitrarily small), we could make storage media as dense as we liked by encoding ones and zeros as the presence or absence of tiny bits of matter.
The alien's stick is a version of this, albeit an exponentially inefficient one restricted to codewords of the form 1111...0000.
In practice, if atoms are about N orders of magnitude smaller than macroscopic objects, we can fit (very roughly) 10^N bits of information in an object, and the alien's method can only fit, as you said, roughly N bits.
Of course, existing storage methods are somewhere in between, because 1 gram of storage media can hold way more than 23 bits but way less than 10^23.
(I'm handwaving past some important distinctions, like the distinction between the size of atoms and the level of detail in the physical world. In classical Newtonian physics, things can be made of particles but the particles can have perfectly continuous positions, so that there's still no ultimate limit on measurement detail. Quantum physics changes this -- although this gets complicated because of the holographic principle; many physicists think the ultimate information limit grows like the 2/3 power of volume, instead of linearly...)
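Putting rough numbers on that atoms-versus-mark comparison, in the same hand-wavy framing as above (my own order-of-magnitude figures, nothing rigorous):

```python
# Rough numbers only: atoms ~1e-10 m, a macroscopic object ~1e-2 m.
import math

atom_m, object_m = 1e-10, 1e-2
N = math.log10(object_m / atom_m)            # ~8 orders of magnitude
dense_bits = 10 ** N                         # one bit per atom-sized site
mark_bits = math.log2(object_m / atom_m)     # one mark read to atomic precision
print(f"dense packing: ~{dense_bits:.0e} bits, single mark: ~{mark_bits:.0f} bits")
```

The gap is the whole story: a handful of bits for the single-mark scheme versus an exponentially larger count for dense encoding.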
> many physicists think the ultimate information limit grows like the 2/3 power of volume, instead of linearly
I may have misunderstood and I'm clearly not up to speed on the literature, but even at an intuitive level, wouldn't this violate other principles?
If this is how information entropy scaled, then either we could "work around" it by having more but smaller storage entities adding up to the same volume, which contradicts the theory directly (because then when would it ever actually apply?); or it somehow enforces the limit over any given volume regardless, and therefore the entire universe's volume (which isn't even finite?) somehow sets and tracks a global limit, because anything smaller would be a workaround. Neither makes sense to me.
Now if we're saying that the simulation running our universe has limitations that make this true in practice in some sense that we can measure but can't work around, I will need an explanation as to why we're not totally freaking out right now. There's navel gazing philosophy and then there's shit like this which could mean we discover a wrongwarp to the end credits within this century.
Interestingly, according to Wikipedia¹, Pieper was not a professor of CS as described in the article; instead, he taught "business administration and corporate governance", which would be consistent with his lack of understanding of the topic (though it's still a giant gap for somebody with a degree in CS).
Watching YouTube videos of demos feels like cheating, but I'm happy they exist, as I'm a bit more reluctant to download and run random exes than I was 20 years ago! Before the videos were available, you had to wait quite a bit while it unpacked and processed before you were either dazzled or disappointed. We used to watch quite a lot of demos as part of our chill-out sessions, as they were the perfect accompaniment to our mental state.
True, but it heightens the moral horror of the situation. Then again, the morality of treating an AGI like that is arguably the same, or at least similar.
I know the tech behind it is pretty different, but this reminds me of .kkrieger from back in the day. An entire 3d FPS, compressed down to 96k. It was pretty neat.
After watching a lot of demoscene stuff and reading about how it's done, you start understanding the limits, and it becomes obvious how things like the sheep (and especially those conical sections for legs) can be expressed as very compact equations and animated the same way. However, it's interesting that AFAIK the majority of these demos rely on the GPU and its powerful 3D acceleration capabilities, while 2D (Japanese) anime-style demos seem to be rare or nonexistent in the smaller sizes. Is 3D animation actually easier?
As a side-note, "mouton" is the French word for "sheep", and thus "mutton".
I think the way you described it explains the prevalence of 3D for demos. Geometric shapes in 3D can be described as closed-form equations requiring a minimum of storage, whereas the only 'mathematical' way to store 2D-style animation is as SVG-like curves for the outlines plus a space-filling algorithm for coloring those areas. The curves are, at a back-of-the-envelope guess, going to require as much storage for a single arc/line as the entire description of a geometric volume. Then there's the issue of storing the animation of those curves, which will require even more space compared to the relatively small transformation matrix for a 3D volume. I would also guess the complexity of the rendering algorithm would increase (in both time and space complexity).
Caveat: one could argue that storing a series of bitmaps and then playing them back like a flip book could be 'mathematical', especially if some procedural decompression algorithm were used to generate full frames from a change differential, but I don't think that exists, and the space requirement would be huge compared to 3D volumetric descriptions.
I don't see why interesting 2D animation couldn't be made using closed forms. A rectangle takes 2 parameters for shape and 2 for position, 3 if you rotate it; similarly for an ellipse. Realistically you also need a Z-index. The soft-min function mentioned in the post would allow merging 2D shapes the same way as 3D shapes, at the expense of one additional parameter.
An approach similar to signed distance fields and ray marching can be used to determine boundaries, and thus the kind of painting inside. This would require starting each scan line at a position guaranteed to be outside all of the shapes, which should be easy. Texturing would be harder, but, knowing the position inside the shape relative to its center, it would be possible to procedurally generate nice gradients, regular textures like bricks or scales, or noisy textures like fur. Using the same trick of computing the field's gradient would allow creating nice thick or styled outlines.
Doing this on a CPU would, of course, be pretty slow, so it would need to be written as a bunch of shaders somehow. I don't see why it wouldn't work, though: each shader invocation could take one scanline, they would share the same geometric model, and shaders are good at doing a lot of FPU math in parallel.
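As a toy proof of concept of that 2D SDF idea (my own sketch, CPU-side and ASCII rather than a shader, and nothing to do with the demo's actual code): describe shapes as signed distance functions, blend them with a smooth minimum, and rasterize by evaluating the field per "pixel".

```python
# Two 2D SDF shapes blended with a polynomial smooth minimum, printed as ASCII.
import math

def sd_circle(x, y, cx, cy, r):
    return math.hypot(x - cx, y - cy) - r

def sd_box(x, y, cx, cy, hw, hh):
    dx, dy = abs(x - cx) - hw, abs(y - cy) - hh
    return math.hypot(max(dx, 0), max(dy, 0)) + min(max(dx, dy), 0)

def smooth_min(a, b, k=0.15):
    # polynomial smooth minimum, as popularized in SDF ray-marching demos
    h = max(k - abs(a - b), 0) / k
    return min(a, b) - h * h * k * 0.25

W, H = 60, 30
for j in range(H):
    row = ""
    for i in range(W):
        x, y = i / W, j / H
        d = smooth_min(sd_circle(x, y, 0.35, 0.5, 0.2),
                       sd_box(x, y, 0.65, 0.5, 0.18, 0.12))
        row += "#" if d < 0 else "."
    print(row)
```

The same field evaluation moved into a fragment shader, plus per-frame animated parameters, is essentially what these intros do; the model itself is just a handful of constants.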
It's a bit more than 8kB, but if you like unexpectedly dark animated movies about sheep I recommend the Blender Foundation's open movie Cosmos Laundromat[0]. Their movies always seem to be weirdly dark for tech demos for some reason. I thought this one in particular came out really well.
Be forewarned: the movie ends with a “to be continued” and the wikipedia page reads:
> The film was originally intended to kickstart a feature-length film. A short film sequel was written and designed but never brought to production. In 2020, [the producer] announced that the one film would be the total of the project.
So if you don’t feel like having the disappointment of an intriguing concept that you’ll never find the resolution for, maybe skip it.
Well, it's still a cool video. If you (the same "you" as in my parent comment) can't handle a 12 minute animation about a sheep never reaching its final resolution, maybe you need the practice. ;)
Oh, come on, what a ridiculous comparison. As if deciding to skip a 10 minute short would reflect in any way on one’s life decisions. This website, man. No wonder the wider internet makes fun of Hacker News. For crying out loud.
On a random related note, courtesy of Netflix, the entire movie is available for download from S3 as a series of 18,192 uncompressed EXR images[1], which came in handy when I was experimenting with HDR in DaVinci Resolve.
There's a commit (https://github.com/ctrl-alt-test/mouton/commit/79d2d1eab7a22...) where we save many bytes by removing a performance optimization. We originally wanted to keep it, but we realized we were short on bytes and that optimization was not required on recent-ish GPUs.
The image in the post is 38.8 kB in size. It's hard to imagine that this animation, including the renderer and the sound engine, fits more than four times over into that image.
Think about it this way: how much space would a screenshot of this post take, versus how much it takes in its textual form?
Yes, the demo is still impressive, but the fundamentals behind the wonderful work are clear and can be summarized in an approachable explanation, as the post admirably does.
There's a whole genre in the demoscene dedicated to 4kB productions. The most popular example to date is Elevated (https://www.youtube.com/watch?v=eGdUDGo2Gxw), which, in an entirely different way, also provides an incredibly good cinematic feeling. Made by wizards, it must be.
Reminds me of the assignment for a 3rd-year Computer Science unit. We had to create a short movie using only C++, and then our last lecture of the year was a screening of all of them. Most were very similar to this in terms of style and simplicity. It was definitely the longest amount of time I spent on any assignment!
I kind of like these things, but (unless I missed it!) they don't seem to specify their constraints. So clearly the graphics wouldn't fit in 8kB, so what does that mean?
It could be a cool SDL competition where you allow a specific version and a set of assets and then let slip the dogs of war.
For PC intros the rules are generally that your 8kB (or whatever size) executable has to run alone, with no other files, on a bone stock install of Windows with no internet access. That means that yes, the graphics and sound are all generated on the fly by the 8kB exe.
But a stock version of Windows has a ton of stuff you can use: all sorts of graphics and audio files. And what's there again depends on which version you're basing it on.
I think it would be cool to run a competition with more specific (and platform independent) set of constraints. I guess I should spend more time thinking about how to organise it myself than complaining that no-one else has!
It's usually forbidden to rely on those files, as they can disappear with Windows updates. The best example I can think of is General MIDI: the files were available with XP and below, and relying on them is now often explicitly forbidden because they're not available anymore, or not in the same form, making demos incompatible.
It's also often forbidden to use the filename to store data. There was that case of the 256B demo that relied on a deep hierarchy of directories to work :)
Indeed, the rules for the Revision party where this was released require the intro to run on Windows 10 (so implicitly no MIDI) and specify that the sample music that comes with Windows will be deleted.
In reality, the multi-gigabyte OS everyone seems to complain about when a sizecoding demoscene production is shown is mostly there as a compatibility layer. You simply can't use modern GPUs without it. One could conceivably do all that on the bare metal with not that much difference in size, the problem is that there is no powerful enough bare metal platform where you can do that.
If you prefer to work closer to the hardware, that's what the "oldschool" and "wild" categories are for. But these are more about overcoming the limited abilities of these platforms than pure sizecoding.
There are competitions for all sorts of retro computer and game console platforms which are "purer" in the sense that there's little to no operating system at all, so the demo has to be programmed against the bare metal. That's not feasible for modern PCs though, you need the OS infrastructure and drivers to abstract over the variety of hardware.
Or if you're web-inclined you could use the browser as your OS:
This has been done and discussed (flamed about) ages ago, in the early era of Windows demos. An operating system has a lot of "free" data, graphical or otherwise (from wallpapers to icon resources), which can be used directly or processed into something else. The code to access them can become too big, but hacks are possible. There's also the question of fonts: is it OK to use them to "just put some text on screen"? Is it OK to use them as a source of all kinds of curves? It is hard to define where "the program" ends and the rest of the system starts. After all, demos need to load libraries into their process space to interact with the GPU, and those libraries have debug functions, example data, and other junk that is not used to put pictures on screen but is nonetheless available. Then what if a tiny application links to 50 system libraries only to have a database of potentially suitable data sequences here and there?
And long before that, even regular software on microcomputers struggling to save each byte relied on known values being in known locations in ROM. It worked, because each device came from the factory with the same firmware, and it could never ever be changed for the reason I've just mentioned.
The solution turns out to be pretty simple: if you think you're smart, other smart people will study your trick, and decide whether it's impressive, or just a one-time joke. Formal rules for demoparties don't mention many possible “size extending techniques” because it's generally accepted that they won't help much compared to what can be done in the same amount of bytes directly by a competent author.
The result is an 8kB Windows executable file. It is self-contained: there's only one .exe file, which generates everything (no resource files). You can download and execute it (it requires a relatively recent GPU).
I'll see how I can edit the text to make this more obvious.
If you look in the download link they give [1], the zip file contains multiple different 8kB exe files, for different resolutions. So it seems the target is executable size.
Very nice. I'm currently playing around with pygame, and trying to make vfx in it. Trying to mimic a bomb explosion with red particles coming off, and smoke.
So curious how these animations are done from scratch with just circle and line primitives.
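Not how the 8kB intro itself works (that's shader-side rendering), but if you're experimenting in pygame, a particle burst really is just a handful of circles. Here's a minimal sketch, assuming pygame is installed; all the numbers and names are made up for illustration:

```python
# Toy pygame "explosion": a burst of circles with velocity, gravity and fade.
import math
import random
import sys

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

particles = []
for _ in range(200):
    angle = random.uniform(0, 2 * math.pi)
    speed = random.uniform(50, 250)              # pixels per second
    particles.append({
        "pos": [320.0, 240.0],
        "vel": [math.cos(angle) * speed, math.sin(angle) * speed],
        "life": random.uniform(0.5, 1.5),        # seconds
    })

while particles:
    dt = clock.tick(60) / 1000.0
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            sys.exit()
    screen.fill((0, 0, 0))
    for p in particles:
        p["pos"][0] += p["vel"][0] * dt
        p["pos"][1] += p["vel"][1] * dt
        p["vel"][1] += 100 * dt                  # a touch of gravity
        p["life"] -= dt
        shade = max(0, min(255, int(255 * p["life"])))   # fade out with age
        pygame.draw.circle(screen, (shade, shade // 3, 0),
                           (int(p["pos"][0]), int(p["pos"][1])), 3)
    particles = [p for p in particles if p["life"] > 0]
    pygame.display.flip()

pygame.quit()
```

Smoke is usually the same idea with slower, upward-drifting grey particles drawn larger and more transparent as they age.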
On Chrome, the download works better when I right-click and select "Save link as". Chrome mentions that it's not secure because it's http. I've just uploaded the file here, maybe it will work better: https://ctrl-alt-test.fr/dl/The_Sheep_and_the_Flower.zip
If you open the zip file, you'll see multiple files. The biggest (200kB) is a screenshot for reference. It's not used, you can delete it. We didn't include a resolution selector in the executable file; instead we provided one binary for each resolution (e.g. The_sheep_and_the_flower-1920x1080.exe).
There’s a kind of ironic beauty in minifying this project all the way down to 8kB, for example by developing a minifying source-to-source compiler, and then passing the screen resolution by shipping one copy of the executable per resolution.
I was curious about that too. Technically it should be possible to read the executable file name (via `GetModuleFileName`, because you don't necessarily have `argv`) and pick the resolution accordingly. But that would take at least 30 more bytes, at a wild guess...
For the justification: it's the standard approach when doing 4kB intros; we just copied it. In the end we had ~30 spare bytes, so we could have looked for an alternative.
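For anyone curious what that alternative would look like, here's a hypothetical sketch of the filename trick, written in Python purely for readability (the real intro would have to call GetModuleFileName from the exe and pay the extra bytes for the parsing):

```python
# Derive the resolution from the program's own file name instead of shipping
# one binary per resolution. Illustrative only; not the intro's actual code.
import os
import re
import sys

name = os.path.basename(sys.argv[0])    # e.g. The_sheep_and_the_flower-1920x1080.exe
match = re.search(r"(\d+)x(\d+)", name)
width, height = map(int, match.groups()) if match else (1920, 1080)
print(width, height)
```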
Yes I downloaded the ZIP, here I put it on my website: https://www.zorinaq.com/pub/The_Sheep_and_the_Flower.zip (SHA256: 91327f463ff5edaae89e1e6fd386f313c33d1f171c84f9e843e263af3d034321) After extracting it contains these files (the zip archive is large because it contains a ~200 kB JPEG screenshot):
I downloaded it just fine using Firefox. However, when I unzipped it, and then opened the resulting folder, there were no .exe files inside. A moment later, Norton reassured me that it had automatically deleted the malicious .exe files, and I was safe.
So I disabled Norton and unzipped the file again. This time there were four .exe files, each 8kB in size, for running the movie in various resolutions.
I double-clicked one of them. It immediately reduced the resolution of my displays, and moved/rearranged all of my open windows, putting them all onto the right-hand display. I heard music, but saw no video, and panicked and hit Alt-F4 to stop it. My display resolutions were restored, but I had to manually put all my windows back where they belonged.
Changing the resolution is expected; there's one executable file for each supported resolution. The program doesn't move or rearrange any other window, but Windows might do it when switching resolution.
I don't know why the graphics didn't start. Maybe the shader compilation failed, or some other issue with the drivers (or the GPU is too old).
The source code is provided, so anyone can check the source and rebuild the binary file. The executable has been tested on multiple machines (and was presented at a demoparty, so it had to run on the compo machine).
Very impressive to see the remake fit in 8kB, considering the original is megabytes of Blender, SVG, and audio files.
Yay, the Creative Commons license was actually useful! I wish the authors had also used the same hedgehog character and audio melody, though.