Interesting how the obfuscated code is explained by slowly unobfuscating it step by step. This is the reverse of how obfuscated code is normally created: by starting with understandable code, and then slowly obfuscating it bit by bit (as I explained for this IOCCC submission [1]).
I say normally because one could also have a superoptimizer search for a minimal program that achieves some desired behaviour, and then one has to understand it by slowly unraveling the generated code.
I’ve used reverse Polish notation as an interview question many times. It works well because, if someone has never seen it, you learn a lot about their basic grasp of algorithms. And if they already know how easy it is, you can extend the question indefinitely by adding symbols, improving the algorithm they build, or doing something like this.
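For anyone who hasn't seen the question, the core of it is a one-pass stack evaluation. A minimal sketch (the function name and the operator set here are my own illustration, not anything from the thread):

```javascript
// Classic RPN evaluation: numbers push onto a stack,
// operators pop two operands and push the result.
function evalRPN(tokens) {
  const ops = {
    "+": (a, b) => a + b,
    "-": (a, b) => a - b,
    "*": (a, b) => a * b,
    "/": (a, b) => a / b,
  };
  const stack = [];
  for (const tok of tokens) {
    if (tok in ops) {
      const b = stack.pop(); // note: second operand pops first
      const a = stack.pop();
      stack.push(ops[tok](a, b));
    } else {
      stack.push(Number(tok));
    }
  }
  return stack.pop();
}

console.log(evalRPN("3 4 + 2 *".split(" "))); // → 14
```

The "extend it forever" part then falls out naturally: add unary operators, variables, error handling for underflow, and so on.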
Such is the result of delving into languages such as Forth[0].
Can you make programs with it smaller than assembly language? Sure.
Will you come out the other side a mad hatter speaking of things such as words, dictionaries, and washing machine firmware? Well, I can only speak for myself...
It would be interesting to see the performance difference versus a WASM version, but in the end I found the human(ish)-readable expression to be quite useful too.
Originally I created the interpreter as a texture maker for code-golfed JavaScript games. https://github.com/Lerc/stackie
There's potential for a WASM implementation to be both smaller than the small version and faster than the fast version.
Yes, it has always been an influence for me. In fact, 9 years ago I implemented a Forth interpreter in plain WAT[2] by de-obfuscating an IOCCC Forth implementation[3] and reimplementing it in Wasm and JS[4].
WASM is cool; I've started implementing a CPU in Verilog that runs unmodified WASM, but I'm finding that the feature creep in the instruction set (SIMD, GC) takes away from the initial values behind WASM (simple, small).
You can ignore SIMD and GC (for now). SIMD explodes the complexity level of Wasm, especially when WebGPU exists. I am curious how you are handling layout and how you are handling all the irregular sizes.
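One likely source of those "irregular sizes" (my reading, not stated in the thread) is that Wasm encodes most immediates and section sizes as LEB128 variable-length integers, so instruction lengths can't be determined without decoding byte by byte — awkward for a fixed-pipeline fetch unit. A software sketch of the unsigned decoder:

```javascript
// Unsigned LEB128 decoding, as used throughout the Wasm binary format:
// each byte carries 7 payload bits; a clear high bit ends the number.
function decodeULEB128(bytes, offset = 0) {
  let result = 0, shift = 0, i = offset;
  for (;;) {
    const byte = bytes[i++];
    result |= (byte & 0x7f) << shift; // accumulate low 7 bits
    if ((byte & 0x80) === 0) break;   // high bit clear: last byte
    shift += 7;
  }
  return { value: result >>> 0, next: i };
}

// 624485 encodes as [0xE5, 0x8E, 0x26] in unsigned LEB128.
console.log(decodeULEB128([0xe5, 0x8e, 0x26]).value); // → 624485
```

In hardware this serial dependency is exactly the kind of thing that forces either a multi-cycle decode or a wide speculative decoder.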
Oh, I don't think so either, but if you think back to the asm.js days, there was a clear goal of "simple and higher perf"; now it's heading in the direction of maximum compatibility with existing stacks (GC, WASI, etc.) at "any" cost.
I have never used Twitter so I might be mistaken but I believe the limit has been 280 for a while now, which is why the first one at 269 bytes would also have fit.
I don't think Twitter's choice of 140 had anything to do with this, though; it's just a coincidence. Back in the dumbphone era, the only way to receive tweets while mobile was via the texting interface, and it would want to prepend the username. I don't think reserving 20 characters for the username has anything to do with how many bits are used to represent the alphabet.
That's coincidence, though. I used Twitter to keep in touch with friends via SMS in 02008, and the messages had space for a prelude to say who they were from. In the opposite direction, you could use that space to tell Twitter to send the message privately to someone.
The username length restriction might come partly from that. They could surely relax it by now, though. I saw it at play this week when @SecondGentleman (15 characters) changed to @SecondGent46.
[1] https://tromp.github.io/maze.html