> Vyxal is a stack-based esoteric array language that is dedicated to dominating competition in code golf challenges. This means that it strips away all need for boilerplate, long function names and impractical source layouts.
> Functions are a core part of Vyxal. They are first class objects, meaning that you can have functions on the stack, functions can take functions as arguments, and functions can return functions.
This is actually all you need for writing very compact code.
If you were to strip away all other features of Vyxal, namely variables, stacks, numbers, strings, lists, and all control flow, and have only functions from functions to functions, plus some way to use these for input and output, then you have Binary Lambda Calculus [1].
The Vyxal code for the infinite list of Fibonacci numbers is 13 bytes long.
In comparison, the BLC code is only 78 bits, or just under 10 bytes: 010101000110100000000001011011001010111110111101110000111110011110100000100010.
Or graphically [2]:
[1] https://www.ioccc.org/2012/tromp/hint.html
[2] https://tromp.github.io/cl/diagrams.html
Hmm, that doesn't seem fair. At that point, why not encode a huge library of programming challenges into the language spec to beat all the challenges with just a couple of chars? Oh well, I suppose it's not really a competition in the usual sense so it doesn't really matter. And code golf language comparisons are kind of apples-to-oranges huh.
What about getting a large set of programming challenges as input and using some optimization algorithm to find an esoteric standard library whose operations, when combined, can solve any of the challenges in minimal program size?
Such a standard library could contain weird operations that, while not tailored to any single problem, would each be pretty hard to grasp well enough to use for anything useful.
It computes a list (as nested pairs in the standard lambda calculus representation) of Church numerals (the standard representation of natural numbers). See also the discussion of the 167-bit primes program output in the first link, which gives an impression of the overhead needed for output in decimals.
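For anyone who hasn't seen those encodings, here is a small Python sketch of Church numerals and of lists as nested pairs (an illustration of the representations mentioned above, not the BLC program itself):

```python
# Church numerals: a number n is "apply f n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(church):
    # Read a numeral back as a Python int by applying (+1) starting from 0.
    return church(lambda k: k + 1)(0)

# A pair feeds both components to a selector; a list is nested pairs.
pair = lambda a: lambda b: lambda sel: sel(a)(b)
head = lambda p: p(lambda a: lambda b: a)
tail = lambda p: p(lambda a: lambda b: b)

one = succ(zero)
two = succ(one)
fibs_prefix = pair(one)(pair(one)(pair(two)(None)))  # 1, 1, 2, ...
print(to_int(head(fibs_prefix)), to_int(head(tail(fibs_prefix))))  # -> 1 1
```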
Huh, once you add a compiler step to turn normal code into "golfed" code, it feels like a different thing, with the goal being to write a compact bytecode format.
This one seems mostly like a workaround for having to type weird Unicode characters, so it still seems "fair" to me, whatever that means.
But I wonder how much a normal Java program can be auto-golfed by playing with the bytecode output.
In my golf language, called stax, each program expressible in plain ASCII has a corresponding representation in a single byte character set. [0] The only reason a program wouldn't start in plain ASCII is the contents of string literals.
The alternate representation generally saves ~15%, as measured in bytes. However, I totally understand the argument about human-unreadable "golfed" code, which is why you can always use the plain ASCII representation if you want.
For instance, you can print Fibonacci numbers in an infinite loop using this program. [1]
10Wb+Q
It pushes one and zero to the stack. Then `W` repeats the rest of the program forever. `b+` copies the top two elements of the stack and adds them. `Q` peeks at the stack and prints to output. This program packs down to `╪Ç►╢0` for a savings of 1 byte (using the language's own impractical character encoding).
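Translated into ordinary Python, that description amounts to something like this (a sketch of the described stack behaviour, not stax's actual internals):

```python
# Rough re-implementation of 10Wb+Q as described above.
stack = [1, 0]            # "1" and "0" push one and zero

while True:               # W: repeat the rest of the program forever
    a, b = stack[-2], stack[-1]
    stack.append(a + b)   # b+: copy the top two elements and add them
    print(stack[-1])      # Q: peek at the stack and print
```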
Isn't the whole purpose of golfing to write compact code in higher-level languages? If writing the lower-level form is too hard, it's no different from JARs, .pyc files, or x86 binaries.
Yeah, totally. I guess really the way to think about it is that any code golfing problem is a separate competition between each language. What's the shortest you can get in JS, in Java, in Pyth etc.
Languages made just for golfing, and how many of them there are, seem a little silly, but they're cute to look at and fun to make, I'm sure. It's art :p Not complaining.
I agree. I used to frequent codegolf.stackexchange.com but nowadays it's just a bunch of 2 character solutions in 50 different home-grown golfing languages.
Golfing languages can be fun, but they do feel like something totally different than trying to make a 200 line C program run in 200 bytes.
There are still different approaches within the same language, and having a broad consensus about which constants and abbreviations are part of the standard library puts more competitive focus on the algorithm instead of the encoding.
I think the size of the interpreter for Vyxal should be included in the scoring, if we're going to be really consistent about this.
On the other hand, if you're going to allow plaintext to be compressed into bytecode, why not include a Huffman encoder as part of the flow? The FizzBuzz example has a string splitter in it (apparently; I haven't read ALL the docs), just to get around quoting some strings, but quotes might cost less than a byte each if you use them enough, etc.
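For what it's worth, a Huffman pass over program characters is only a few lines. A rough Python sketch, using the stax snippet from above as the (tiny, not very representative) corpus:

```python
import heapq
from collections import Counter

def huffman_code(text):
    # Priority queue of (frequency, tiebreaker, node); leaves are characters.
    heap = [(f, i, ch) for i, (ch, f) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next_id, (left, right)))
        next_id += 1
    # Walk the tree assigning bit strings: left = "0", right = "1".
    codes = {}
    def walk(node, prefix):
        if isinstance(node, str):
            codes[node] = prefix or "0"   # single-symbol edge case
        else:
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
    walk(heap[0][2], "")
    return codes

program = '10Wb+Q'
codes = huffman_code(program)
bits = sum(len(codes[ch]) for ch in program)
print(codes, f"-> {bits} bits vs {8 * len(program)} bits as 8-bit chars")
```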
Lastly, who can make the smallest Vyxal executable? Let's use Jart's excellent Actually Portable Executable [1] format as a kicking-off point. That seems like the best way to make it work for almost anyone. If not, then maybe DOS/Windows/ARM/VAX/PDP-11/Z80 categories might be needed?
A Vyxal compliance suite of tests is required to enable all of the above. Who can make it the smallest? ;-)
> I think the size of the interpreter for Vyxal should be included in the scoring, if we're going to be really consistent about this.
Code golf categories are arbitrary anyway, and can be as strict or as permissive as the golf course designer likes. If you need the most raw representation of code size, how about a code golf category where you have to design the smallest physical computer that can perform a certain task?
> Who can make it the smallest?
I noticed that Vyxal 2 allows you to grab arbitrary Python modules and make calls, but this doesn't seem to be available anymore in Vyxal 3. So I'm thinking you can make a smaller Vyxal 3 implementation than a Vyxal 2 one. Either way, both reference implementations are fairly large runtimes.
> assembly on a specific machine, which seems very fair to me.
It seems a little biased toward imperative languages. The earliest models of computation, namely combinatory logic with S & K and the lambda calculus, date back to the 1920s and, being exceedingly simple, can easily be implemented in any modern functional language, but not so easily in an imperative one without support for closures.
On the other hand, implementing an imperative language in a functional one is relatively straightforward.
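To make the S & K remark concrete, here is a minimal sketch using Python closures (my own illustration; any language with first-class functions works the same way):

```python
# The two combinators, as curried closures.
K = lambda x: lambda y: x                      # K x y = x
S = lambda x: lambda y: lambda z: x(z)(y(z))   # S x y z = x z (y z)

# The identity combinator can be derived as S K K.
I = S(K)(K)
print(I(42))   # -> 42
```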
Any defined runtime seems fair, though a real computer will always seem more canonical.
The two parts that seem unfair to me:
1. There are hidden bits of information in the choice of one language over another that can't be counted. Imagine a family of 26 languages that are the same except that the first letter is different, etc.
2. Some of the golfing is in how good your chosen language's bytecode encoding is, not in how clever your solution is.
Standardizing on one format would fix those, but compared to the demoscene you would lose the cuteness of turning on your physical computer and trying it out, and compared to current code golf you would lose the fun of finding and playing with cool new languages.
Or, I guess, just treat each language as a separate category and don't compare between them.
My motivation in defining BLC was to have the simplest measure of the complexity of things. That includes binary strings, natural numbers, tuples, lists, sets, and functions of those, as well as languages (according to their interpreter) and formal systems (according to their theorem enumerator) [1].
BP is like a slightly messier BrainFuck. Its alphabet size of 17 is rather odd. Its spec (Section E.2 of [1]) says
> programs containing unbalanced parentheses are made valid, in particular by skipping any additional ].
But that still leaves invalid programs like "[[[[".
From a theoretical viewpoint, its biggest flaw is the lack of an input tape, which makes it a non-optimal description method in the sense of [2]. In particular, while BLC programs are at most a constant larger than BP ones, the reverse is not true.
The version of BF that I interpret in [3] is still optimally universal, because its program-delimiting first unbalanced ] is followed by its binary input.
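A minimal sketch of that delimiting rule in Python (my own illustration, not the referenced interpreter):

```python
def split_program_and_input(stream: str):
    """Split a stream into a BF program and its trailing binary input
    at the first unbalanced ']'."""
    depth = 0
    for i, ch in enumerate(stream):
        if ch == '[':
            depth += 1
        elif ch == ']':
            if depth == 0:
                # First unbalanced ']' ends the program; the rest is input.
                return stream[:i], stream[i + 1:]
            depth -= 1
    return stream, ''   # no unbalanced ']': everything is program

prog, data = split_program_and_input('+[>+<-]]0110')
print(prog, '|', data)   # -> +[>+<-] | 0110
```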
> Vyxal aims to bridge the gap between simplicity and "golfability".
With code golfing languages, there are inherent tradeoffs between code size and usability/fun. IMO the most important features for minimizing length (assuming the only rule is the interpreter must be published before the challenge) are:
* (in Vyxal) Efficient syntax. Not sure what the state of the art is anymore, but stack-based seems reasonable.
* (in Vyxal) String compression
* (not in Vyxal) Efficient encoding: Huffman coding at a minimum, but ideally arithmetic coding using sophisticated machine learning to predict the next command. It's super inefficient to have each command take 1 or 2 bytes regardless of frequency (see the rough estimate after this list).
* (not in Vyxal) Huge numbers of builtins; Vyxal has "only" ~560. Ideally every past code golf question and every OEIS sequence are their own builtin.
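Here is the rough estimate promised above for the encoding point. The command frequencies below are invented purely for illustration, not measured from any real corpus:

```python
import math

# Hypothetical command frequency model (made up for the example).
freqs = {'+': 0.20, 'M': 0.15, 'R': 0.12, 's': 0.10, 'W': 0.08, 'other': 0.35}

# Shannon entropy = lower bound on average bits per command for this model.
entropy = -sum(p * math.log2(p) for p in freqs.values())
print(f"~{entropy:.2f} bits/command under this model vs 8 bits/command flat")
```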
Vyxal might hit a sweet spot, but I'm skeptical that it actually scores as well as other languages with more of these features.
Since version 3 was rewritten from Python to Scala, does that mean it drops support for Python FFI? Does it replace it with Java FFI?
Python FFI in Vyxal was neat because you could trivially import Python libraries like sockets, or start doing FFI to C libraries with calls to Python's ctypes. It was a mess, but it's satisfying just how many different cross-cutting concerns you could glue together with very little code, when most other esolangs don't provide much of a standard library.
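For context, the ctypes half of that glue looks like this in ordinary Python (plain Python, not Vyxal's FFI syntax):

```python
# Call a C library function directly through ctypes (Unix-like systems).
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.printf(b"hello from C via ctypes: %d\n", 42)
```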