> It's the fact the author types like they're being charged by the character and taxed for whitespace.
I dunno about you, but I have very little working memory in my head, just a few items, and only one focus of attention with a narrow field of view.
I can’t read that style of code either, but opening a traditional codebase, where one piece of code here uses lots of symbols to achieve a tiny amount of work and the next piece of work sits four screens away, again using lots of symbols to achieve a small amount of work, is like declaring legalese the most readable kind of English because it bloats so much that it must be readable.
> judging by the only standard that matters, readability, this is not a good coding style
What about composable idioms and conventions which let you do a lot of data transforms with a little attention and time?
What about a small amount of code to maintain and refactor? If this style saved you ten lines it might not help but where’s the cutoff that you’d start to be interested - if it saved you 50% of the work? 75%? 99.999999%?
If “bug count per line of code is constant”, this ought to be an exceptionally low-bug style, right?
What about having less code to demonstrate correctness of?
What about having little code to throw away and rewrite if you don’t like it, leading to a faster refactoring cycle time and a better expression of your intent after a set time? Or leading to readability simply being less important because the cost of rewriting 50 lines from scratch is so much lower than rewriting 10,000 lines from scratch?
Or if you’re writing documentation and there’s an order of magnitude less code to write about?
Or if you should write as many lines of test code as real code, and there’s an order of magnitude less code to test?
The whole style isn’t just “golf the character count”, but “collapse down the size of everything so one human attention covers a lot more of it, like zooming out to get a higher overview”.
Crossing down from several files to one file is a phase transition; crossing from one long file to one screen is another.
> I dunno about you, but I have very little working memory in my head, just a few items, and only one focus of attention with a narrow field of view.
Right. Which is why readable, mnemonic names are so important. Single-character names are fine for array indexing, because their scope is so small you can (usually) see the whole region they're used in within a fraction of a screen, but variables which get used over a larger fraction of the codebase need names which explain what they are and why.
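To make that concrete, here's a throwaway Python sketch (Python just to keep it short; the names are invented for the example):

    # 'i' is fine here: its whole life is a couple of visible lines.
    def column_sums(matrix):
        totals = [0] * len(matrix[0])
        for row in matrix:
            for i, value in enumerate(row):
                totals[i] += value
        return totals

    # At module scope the same brevity stops paying for itself:
    # retry_budget and last_good_checkpoint explain themselves from a distance,
    # where 'r' and 'c' would not.
    retry_budget = 5
    last_good_checkpoint = None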
> What about composable idioms and conventions which let you do a lot of data transforms with a little attention and time?
Great. That doesn't preclude meaningful variable names.
> What about a small amount of code to maintain and refactor? If this style saved you ten lines it might not help but where’s the cutoff that you’d start to be interested - if it saved you 50% of the work? 75%? 99.999999%?
Great. That doesn't preclude meaningful variable names.
> If “bug count per line of code is constant”, this ought to be an exceptionally low-bug style, right?
If that's true to an unlimited extent, IOCCC programs should be nearly self-debugging, with how short they are.
> What about having less code to demonstrate correctness of?
Testability is about each bit of code doing one thing, not doing ten things in one line. You can test one thing. If you try to test ten things at once, you have 2^10 = 1024 possible ways for that test to come out, and that's assuming Pass/Fail is completely binary, as opposed to real testing, which is more like Pass/Fail/WTF.
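Concretely (a Python sketch of my own, not anything from the linked code):

    def parse_port(text):
        port = int(text)              # behaviour 1: numeric parsing (may raise ValueError)
        if not 0 < port < 65536:      # behaviour 2: range check
            raise ValueError("port out of range")
        return port

    def test_parses_plain_number():
        assert parse_port("8080") == 8080

    def test_rejects_out_of_range():
        try:
            parse_port("70000")
            assert False, "should have raised"
        except ValueError:
            pass

One behaviour per test means a red test points at one thing; a single test asserting ten behaviours has 2^10 outcomes to untangle.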
> What about having little code to throw away and rewrite if you don’t like it, leading to a faster refactoring cycle time and a better expression of your intent after a set time?
Throwing code away doesn't hurt my feelings.
> Or leading to readability simply being less important because the cost of rewriting 50 lines from scratch is so much lower than rewriting 10,000 lines from scratch?
I can't rewrite what I can't read. I would struggle to rewrite code I struggle to read.
> Or if you’re writing documentation and there’s an order of magnitude less code to write about?
Writing documentation based on LoC count is deranged.
> The whole style isn’t just “golf the character count”, but “collapse down the size of everything so one human attention covers a lot more of it, like zooming out to get a higher overview”.
And we circle back around: Lisp macros allow me to zoom out like that by letting me factor out common patterns more easily, like collapsing all possible "assignment pattern" instances to a setf invocation. I don't see the guts of how assignment is done for that particular kind of data when I'm focused on the other bits of the codebase. It's out of focus for me then, and it collapses neatly into one little form.
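If Lisp isn't your thing, here's a loose Python analogy of that collapse (a property rather than a setf expander, but the same effect at the call site):

    class Sensor:
        def __init__(self):
            self._celsius = 0.0

        @property
        def fahrenheit(self):
            return self._celsius * 9 / 5 + 32

        @fahrenheit.setter
        def fahrenheit(self, value):
            # the guts of how assignment works for this kind of data live here, once
            self._celsius = (value - 32) * 5 / 9

    probe = Sensor()
    probe.fahrenheit = 98.6   # at the call site it collapses into one plain assignment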
But, and here's the big sticking point for me, I can do all that without variable names which look like line noise.
> If that's true to an unlimited extent, IOCCC programs should be nearly self-debugging, with how short they are.
This is "they all look the same to me" prejudice. IOCCC code relies on abusing undefined behaviour, clever tricks, and edge cases; terse code like the OP link and the array languages relies on composing high-level abstractions and using them over and over in learnable patterns. That aside, IOCCC programs wouldn't be "self-debugging" by this argument; they would "have fewer bugs". It shouldn't be too hard to accept that a given program written in 5k lines, 50k lines, and 500k lines would take different amounts of debugging effort. Whether it's a significant enough trade-off of harder-to-code vs saves-much-debugging is a better question.
> Throwing code away doesn't hurt my feelings.
And your feelings are the only potential cost involved in rewriting code, there's no time involved?
> I can't rewrite what I can't read.
Of course you can? This is so common it's a programming trope: "this is garbage code, nobody can tell what it's doing, let's rewrite it". Although I'm not sure that argument works in my favour - Netscape's rewrite of Navigator dragged on for years; was that because they /couldn't/ read the code they were rewriting, or because they couldn't rewrite it better while staying bug-for-bug compatible?[1] Even then, writing one function and thinking "I could do this better" and rewriting it, then thinking "I could do this better" and rewriting it again, is a cycle which gets much worse the longer the code you have to rewrite.
> Writing documentation based on LoC count is deranged.
But also necessary, unless you either a) admit that the extra LoC isn't saying anything important (so you have a poorly expressive language which forces you to write lots of boilerplate that isn't worth talking about), or b) leave more of the code undocumented.
> Lisp macros allow me to zoom out like that by letting me factor out common patterns more easily
"My way of writing dense composable code which outsiders hate and consider unreadable is great tho!", lol. Lisp goes towards climbing an abstraction tower of code-which-writes-code to get more done with less code, rather than simply /writing less code/. But at least this way you agree that "less code is better", by factoring out common patterns? So why not push for common patterns which are shorter and achieve more in normal languages (the way we eventually changed for loops to map in popular languages)?
> But, and here's the big sticking point for me, I can do all that without variable names which look like line noise.
Exactly what variable names do you think would make the code in the OP link "readable"?
> Great. That doesn't preclude meaningful variable names.
It rather does; it's possible to give functions names, but when you have small functions and compose them together, it quickly gets to the stage where there's no meaningful English name to give them which helps clarify what's happening. In an APL example:
⊂ ⍝ left shoe encloses (nests) an array into a scalar
, ⍝ ravel unravels an array into a vector
⊃ ⍝ right shoe (NARS2000 APL) discloses a nested vector into a matrix
The pattern
⊃,⊂ 1 2 3
is common; it turns the 1D vector 1 2 3 into a 2D matrix where the first row is 1 2 3. What would you name the pattern ⊃,⊂ so the name clarifies what it does without getting in the way?
Once you've decided on that name, what do you name the data which goes into it and comes out of it, in a way that usefully explains what state it's in without getting in the way? "matrix_of_grade_indices_into_data_array"? But you can already see all of that from ⊃,⊂⍋data itself; the symbols say what those words say, only more precisely.
By the time you've written and read all those English words, you've hidden what the code does and covered it over with much less precise verbiage. The pattern is common because you can do different things with a 2D matrix than with a 1D vector, and it's so short that you can inline it, ⊃,⊂⍋data, right where you need it. A spelled-out version, something like "matrix_of_grade_indices_into_data_array ← disclose ravel enclose grade_up data", isn't clearer either. And you can say I'm making up stupid variable names to obscure it, but what variable names wouldn't obscure it? How could this terse style not lead away from "English word variable names"?
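For anyone who doesn't read APL, here's the rough shape of it in NumPy (my analogy, not the same semantics, since NumPy has no nested arrays, but the same move):

    import numpy as np

    data = np.array([30, 10, 20])
    grade = np.argsort(data)      # roughly ⍋data: the indices that would sort it
    row = grade.reshape(1, -1)    # roughly ⊃,⊂: promote the 1D vector to a 1-row matrix
    print(row)                    # [[1 2 0]]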
As for the C style code which I can't read, I strongly suspect that what it does is this kind of code-block combining, not IOCCC style compiler abuse. In that case it won't help anyone with the skill to work on it to rename "x" to "pointer_to_first_function_argument_in_backwards_compatible_adjusted_scope", because by the point such a person understands the code, they can see that's what "x" is; that's what "x" always is, and besides there's only a page of code to see it in, in the same way that a loop index variable is easily read without needing a long name in "for (int i = 0" code.