By keeping the code as visible (read: small) as possible, I see more code and can better reason at a macro level. To scale this down to the micro level of dealing with individual compiler passes, I replace all the traditional programming paradigms with others in a sort of one-for-one exchange. In this way, I develop a new set of idiomatic programming methods that are so concise they can begin to be read the way we read and chunk English phrases. It then actually becomes easier to just write out most algorithms, because the normal name for such an algorithm is basically as long as the algorithm itself written out (see the small illustration at the end of this comment). This means that I learn to chunk idioms as phrases and can read code directly, without the cost of name-lookup indirection.

I can get away with this because I've made reusability and abstraction vastly less important: I can literally see every use case of every idiom on the screen at the same time. It would literally take more time to write the reusable abstraction than it would to just replace the idiomatic code in every place. It's a case of the disposability of code reaching a point where reusability is much less valuable. This means that in those cases where reuse is valuable, it's very valuable, and it comes to the fore and you can see it as the critical thing that it is. It doesn't get drowned in otherwise petty abstractions that exist only to assist reusability, since we don't need those anymore.

Furthermore, if I write my code correctly, there is very, very little boilerplate in the compiler. Almost none. This means that every line is significant. You don't get the fun of feeling like you're accomplishing something by typing in lots of excess boilerplate, but it does mean that you have no wasted architecture. Because rewriting the architecture is so trivial, basically everything becomes important, and you don't have petty bookkeeping code around. You know that everything matters, and there are no superfluous bits.

The result, as mentioned elsewhere, is code that is getting continuously simpler, rather than continuously more complex. The code is getting easier to change over time, not harder. The architecture is getting simpler, more direct, and easier to explain. Because it costs so little to re-engineer the compiler, I can do so constantly, resulting in little to no technical debt.

This is an intentional, synergistic choice of a host of programming techniques, styles, disciplines, and design choices that enables me to program this way. Give up one of them and things start to break down. It allows for a highly optimized code base that has all of the desirable properties people wish their code bases had, and it scares people. I think that's a good thing, because I don't want people to see this codebase as just another thing. I want them to see that this is something truly different. How can I get away with no module system? How can I get away with no hierarchy? How can I get away with having everything at the top level, with almost no nested definitions? How can I get away with writing a compiler that is not only shorter, but fundamentally simpler from a PL standpoint than standard compilers of similar complexity, using only function composition and name binding? How can I get a code base that has more features but continues to shrink?

By chasing smaller code.
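To give a throwaway illustration of what I mean by idioms reading like phrases (this is a textbook APL idiom, not a line from the compiler), the classic definition of an average is about as long as its name:

    mean ← +/÷≢        ⍝ a fork: sum divided by tally (the count of items)
    mean 1 2 3 4       ⍝ → 2.5

Once +/÷≢ is chunked as a phrase, naming it buys you almost nothing; you just write it where you need it.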
:-)

I assure you, and I'll make good on this in another reply here, that I could get you up and running on understanding the code and how it works faster than just about any other compiler project out there. In the end, one of the goals I have for this compiler is for people to say, "Whoa, wait, that's it? That's trivially simple." The more I can push people to think of my compiler as so trivial as to be obvious, the more I win. The compiler really is so dirt simple as to shock any normal compiler writer.

But to make it that simple, I have to do things in ways that people don't expect, because people expect complexity and indirection; they expect unnecessary layers for "safety," and they expect code that needs built-in protections because the code is too complex to be obviously correct.

I'm pushing the other direction. If you can see your entire compiler in one go on a standard computer screen, what sort of possibilities does that open up? You can start thinking at the macro level, and simply avoid a whole host of problems because they are obviously wrong at that level. When you aren't afraid to delete your entire compiler and start from scratch, what sort of possibilities does that open up to you?
First, please let me apologize for my ill-considered and rude comment... cringe.

Thank you for explaining. Wow, so much to chew on here. The naming conventions and trains sound really interesting. I can see how having a lot of the code visible on one screen would be a fantastic advantage. Again, thanks for writing this up. Obviously I didn't find your code transparent at first glance, but clearly, if one takes the time to understand what you are doing, the approach has its benefits. I look forward to reading more of what you post. And you've got me intrigued about APL.
Your comments reminded me of this anecdote about Arthur Whitney:

"The k binary weighs in at about 50Kb. Someone asked about the interpreter source code. A frown flickered across the face of our visitor from Microsoft: what could be interesting about that? “The source is currently 264 lines of C,” said Arthur. I thought I heard a sotto voce “that’s not possible.” Arthur showed us how he had arranged his source code in five files so that he could edit any one of them without scrolling. “Hate scrolling,” he mumbled."

I suspect his code looks a lot like the J incunabulum:
It does. Furthermore, he's "simplified" APL in K to require less infrastructure, with fewer primitives, and the like. Combined with some clever, and some would argue devious, programming practices, he's able to keep things pretty small. I don't know if the interpreter is still that small, though. If someone reminds me, maybe I can talk about scrolling. :-)

Since I believe Whitney wrote the J incunabulum, I suspect that it looks very similar. The code is actually quite simple and straightforward if you take the time to read it.
Could you write a blog post (it would probably need to be several) about the code style, architecture, and design of your compiler, and the idioms that you talk about? I love the idea of keeping a project code base so small, by leveraging concise idioms, that everything fits in a meat-bag head, but I have no idea how one goes about achieving that in practice. (Learning APL to get some pearls of wisdom would be fine.)
It's something I've been working on for a while, but because the architecture is under constant flex, it's actually more valuable to know how to "experience" or discover the architecture in the compiler code itself than to have a separate document to follow, since it's very easy for such a document to get out of date quickly. I am building up a set of documents that discuss some of the core idioms and ideas, though, and I hope something comes of this live session that I can put into an interactive document people can work with.
 The little essay you've given us in these two HN comments is one of the most brilliant things about programming I've ever read.
Two things I want to say/ask.

1. What happens if you get sick? You say this is a project in production and there is money on the table (I assume not only yours). What if you get sick and are unable to work for 3 weeks or 6 months? Don't you think that this code would be very hard to grasp for someone else who had to temporarily work in your position?

2. It is weird that you wrote such a long essay, spanning two comments, yet it has so few examples from the actual code. Usually when people explain stuff they move between the abstract concepts and how those are materialized in the code. Here you only explain the idea behind writing it and how it makes you feel/operate/gain flexibility and performance, but the closest to the code I got from it is that it has compiler passes and that it has a C++ runtime in a string variable. Just a thought; what do you think about that?
At this point, if I get sick, the code doesn't move much. If I were permanently disabled, someone else could take over. I have people contribute bugs, tests, and other things fairly often. If you had to temporarily work on the code base and weren't familiar with the background of the project, I would say you'd be lost. It's just not the sort of thing where you can start tweaking things here and there so easily, because almost everything that needs changing is a matter of addressing architectural or serious questions that require you to really understand the project. Because of the way the code is written, there's basically no "code monkey" type work. That means that you only do meaningful work, but it also means that only people who are knowledgeable architects can work on the code. You can imagine the same thing in other code bases. Imagine that you didn't need any of your lower-level programmers anymore because there was nothing for them to do. Now imagine how the bus factor changes when only your chief architects are necessary for working on that code base. That's very nice in one dimension, but it does create quite a different picture.

You're right about the code examples. I figured that people were already posting some code snippets, so I wanted to give the big ideas rather than any specifics. The reason for this is basically that if you take any single line of code out of context, it's a bit hard to explain why I'm doing the things that I'm doing. It's very much a macro design, which is why I am offering the live session to go through it. It's sort of, but not quite, an "all or nothing" thing. If you let me sit down with you and go through the entire code base, then I can explain how it all fits together and why things are the way they are, but if you just take a single piece of code out, you're missing the picture.

If I took a single compiler pass out, for instance, you'd have between 1 and 12 lines of code to look at. I could explain a few features, but how would I explain that when you look at that piece of code in the compiler you're able to see it entirely in context? Well, I can't, because the code is completely out of context at that point. Or what about demonstrating how the naming conventions exhibit structurally informative regularity? Again, I can't, because that's a visual design element of the code. It's something you have to "see" by looking at the whole painting, as it were.

The naming convention is actually a great example. Out of context, there's apparently no rhyme or reason to it. But in context, it forms a key component of the visual regularity and continuity throughout the code. The names are an important part of how you can see the structure of the code; they help to orient you in the big picture. But if I were to quote a single line here, there's no picture to look at, no sky to navigate by. It's just a single constellation. By analogy, it does less good to say, "Here's the Big Dipper, it's useful." Why is it useful? Because it's easy to find amidst the context of the stars, and its shape helps you to find the North Star. On its own it doesn't seem as valuable; at that point it is just another constellation. The same thing happens with this code.

So I'll go through and explicate it all in detail in the live session, where I can provide the "painting" and workflow in its entirety so people can see how it works. Then you can see how my comments here match up with the code.
geocar on Feb 7, 2017

Something that might be worthwhile to consider is that someone who wants to make a change only needs to look at a small program instead of a large program.

In the large-program case, the programmer feels like they can cross-cut it, install some duplication, and yes, get their change done faster, but at the cost of making the program bigger.

But in the small-program case, you only pay the cost of learning the codebase when you add a new programmer to it -- something that happens very infrequently. Your program stays small, and you gain all the benefits therein (faster, fewer bugs, and so on).
This is really admirable stuff, and I share this kind of goal even though I'm not working in APL style at this time, though I understand the appeal of shifting in that direction as more of the code gets abstract - and it necessarily should be that abstract if you're trying to maximize simplicity. I believe most codebases suffer from prematurely abstracting with the easy stuff built into the source language (classes, generics, etc.), and then not having the abstraction they really need when it's necessary, and being too tangled up to build it.

The only problem is that I don't know where to start if I wanted to study what you're doing and take notes. Those millions of lines of changes are still lurking in the background as building blocks for an overall understanding.
The live session would be the first start, obviously, but you can also see the Publications area of the README: https://github.com/arcfide/Co-dfns#publications

Some of that deals with the micro and some with the macro level ideas, but there are some key elements in those that will be necessary to appreciate the whole thing.
> Don't complain that Chinese is ugly and unreadable just because you speak English as your native tongue.

That's a great counterargument, and one I fully agree with. I've noticed that over the years there has been a growing trend of promoting "readable, maintainable, clean, insert-fashionable-adjective-list-here code," which really amounts to a lowest-common-denominator, dumbed-down perspective of how software should be written. From that perspective, code that someone does not immediately understand is "bad," seemingly regardless of how much (or little) knowledge that someone possesses. I think this is ultimately a harmful trend.

The opposing view, which appears to be largely a minority in more mainstream language communities but dominates in others like APL and Asm, is that programming languages are essentially like human languages: they need to be learned, are not necessarily "easy" or "familiar," and this learning and eventual mastery is wholly beneficial to their use. As with human languages, it is neither expected that a beginner will immediately understand code written by a more advanced user, nor is that a problem. Instead, the beginner progresses by learning the language and eventually becoming an advanced, "literate" user. This can be summed up in one sentence: "The code is unreadable because you are not yet qualified to read it." ;-)
Taking an example from the parent:

> rth,←' array v=array(z.s,zs.v.type());v(0)=zs.v(0);\',nl

I don't know APL or the Rth variant, but this goes against most standard style guides. There may be good reasons for it, but they are not obvious to an external viewer.

- What are 'v' and 'vs'? Is this quickly obvious from context (to anyone but the author)? Where are the comments?

- Why are they single characters? (Non-descriptive variables are almost always a code smell.)

- Why does it do multiple things on one line? I think this is a limitation of the use of Rth? Normally this sort of whitespace compression is verboten, even in functional languages like Lisp.

I think the parent has a good point about this being very difficult-to-understand code, and that the OP has confused terseness with quality. If this code followed traditional coding styles it would be easier for new people to understand what the hell is going on, and would probably be 3+ times longer in LoC. But who the hell uses LoC as a valid metric anyway? Besides the worst sorts of manager, of course...
> but they are not obvious to an external viewer

That's precisely the point. The whole philosophy of APL is that it's not supposed to be obvious to anyone who doesn't (yet) know it. However, seeing as the character set is still Latin, it's not hard to guess at what it does even if you don't know the language.

- I don't see 'vs' in the snippet, but would guess V stands for Vector.

- Even in all but the most anal "standard style guides," single-character names are normal for temporary/limited-scope variable names.

- You could likewise ask why Chinese words aren't separated by spaces, or why English words need to be. It's a different language with its own grammar and style.

I don't know APL either, but at least I make an effort to see their perspective on the language, because it is clear that there are people who are highly proficient at working with code like this. (Likewise, I would guess that experienced APL'ers probably find more "traditional" languages like C, Java, Python, etc. "unreadably" verbose.)
V for Vector is an appropriate, but not the only, interpretation of that letter.

In the case of this compiler, I take the opposite convention. Most single-character names are globally meaningful, and their meaning rarely, if ever, changes across the whole compiler. As names get longer, they progressively represent more local elements. This is done in a way that reveals the nesting structure, but also because, over time, I realized that it was harder to remember from one patch to the next what the local variables were meant to do than the global variables, which were almost always the same and much more likely to be in mental cache. Therefore, I use more "information," that is, more characters, for local names whose meaning I would be more likely to forget later, than for global names that are universal and almost always on my mind (see the rough sketch below).

And yes, I did try doing this compiler in many, many other styles, including C, C++, Nanopass Scheme, ML, Java, Cleanroom, traditional APL style, and so on and so forth. They were all unreadably verbose, difficult to work with, and very hard to make forward progress on.
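To sketch roughly what that distribution of name lengths looks like (the names and definitions here are invented for illustration; this is not Co-dfns code):

    e←3 2⍴⍳6              ⍝ e: a single-letter global; it means the same table everywhere
    f←{2×⍵}               ⍝ f: another global whose meaning never changes
    pass←{
        rows←e[⍵;]        ⍝ rows: a longer, local name, spelled out because it is easy to forget
        doubled←f rows    ⍝ doubled: likewise local, so it gets more "information"
        doubled
    }
    pass 2                ⍝ → 6 8

The point is only how the "information" in the names is distributed between global and local scope, not the particular code.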
> There may be good reasons for it, but they are not obvious to an external viewer.

But isn't the real question whether that obviousness is more important than ease of comprehension and maintenance for someone who does have the required skills to work on the code?

To a child who has just learned squares and square roots and who has never encountered TeX, the expression $e^{i\theta}=\cos\theta+i\sin\theta$ is probably just line noise. To a practising mathematician, it is immediately recognisable and a useful tool. Obviously the difference is that the experienced mathematician has learned the underlying concepts and the notation to represent them. The result is that while the teenager might be learning double-angle formulae by rote for their trigonometry exam in a few years, the experienced mathematician could use their more powerful tool to derive those formulae or any variations on the theme in moments whenever they need them. Their greater skill and understanding makes them much more capable.

There are certainly reasonable arguments for making the code for some projects accessible to new developers, but doing that isn't free if it also means compromising some aspect of that code for current developers. It's a trade-off, and sometimes requiring new developers to have a certain level of skill and understanding before they can work on a project is OK.
 Well put. The important thing is to see what the tradeoffs actually are. Unseen tradeoffs often look like obvious wrongness.
geocar on Feb 7, 2017

x,←y simply appends y to x; x,←y,nl appends y to x, then adds a newline. The quoted '...' part is just a string containing C code. I think you're making this more complicated than it is.
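A minimal, standalone sketch of the idiom (the variable names here are invented for illustration; this is not the Co-dfns source):

    nl←⎕UCS 10                    ⍝ a newline character
    rth←''                        ⍝ start with an empty character vector
    rth,←'int main(){',nl         ⍝ x,←y catenates y onto the end of x
    rth,←'  return 0;',nl
    rth,←'}',nl                   ⍝ rth now holds a tiny C program as text

The accumulated string can then be written out and handed to a C compiler later.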
 I would be more sympathetic to this argument if the code was visibly a collaboration.I am perfectly willing to believe that I could reduce the size of my code by a factor of 10, maybe even 100, if I was willing to give up the constraint of making it maintainable independently of myself. I think that would be a poor tradeoff to make in most cases.
 You have a great point but I would state it in a positive way. What sort of system could a small team build if more than one programmer (let's say 3 or 4) could maintain the intimate familiarity with a small codebase, and consequent hyper-productivity, that arcfide is describing?
 You're clearly very enamoured with this approach; I'm not. I've seen it before (as arcfide is reminding us, APL has been around for many decades; I find the Forth philosophy similar too) and I think it's a dead end, a seductive trap. You can't build for single-programmer productivity and then retrofit maintainability afterwards.More generally I think choosing tools based on small examples is a big systematic bias affecting the industry; I can absolutely understand why people do it (because who has time to compare large systems) but I think it holds us back, and I think this particular programming style games that metric even more heavily than most, meaning people falsely attribute advantages to these languages that don't exist in the real world. I think the scepticism a lot of people are showing here is very healthy and frankly I'm surprised you don't share it.
You can go through the Dyalog meetings and see how APL scales up and down along the spectrum.

I'm glad you think my compiler is a small system. The problem I'm solving is one that people said was simply too difficult and impractical to pursue. If I have made it so simple as to be dismissed as trivial, then that's good. :-)

I'm happy to walk you through the compiler in the live session and let you decide for yourself just how maintainable it would be if you had to pick it up. But this code base has been designed with maintainability in mind from the beginning.

How big is a big system? You've called this a small system, but it's a compiler with commercial backing/funding that compiles a language used in production systems, and it is, to my knowledge, the only compiler able to express core compilation algorithms in an efficient manner on the GPU. It's rapidly moving to the self-hosting point, and at that point we will have a complete compiler that compiles a real language and runs completely and entirely on the GPU, from parser to generator.

To give you an idea of this task: a basic scan primitive implemented efficiently on the GPU, in the neatest and cleanest code that I know of published in the literature, is 100 lines of code. If you compressed it, you could probably fit it into 50-70 lines. That's for one simple operation that takes anyone a single line of C code to write (a sketch of what a scan looks like at the source level follows at the end of this comment). This project has taken a real compiler (it's not a C++ compiler, of course) and is putting it on the GPU. Is this a small system? I would put it in the realm of the sort of problem that can only be meaningfully solved by simplification.

However, this isn't the only code base around. There's another company with a larger team of APLers who maintain over 1 million lines of APL code in production. At that scale they have to make different design choices than I do, but they also say, if they can do it in APL, they do, and they wish they could do everything in APL. They are one of the only groups, to my knowledge, who have been able to see a net gain in value from implementing a static type system on top of APL's core. So, in terms of scalability, yeah, maybe you need something more (like a static type system) as your code grows, but if you manage to need 1 million lines of APL for your problem, then you're in a good place.

Still, just come to the live session and we can discuss all of the issues that you see with maintainability. If you can see a way to make the code simpler and easier to reason about at a macro level, I'll be all for it!
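For context on what a scan is (a generic illustration, not code from the project): at the source level it is a single APL primitive, and a serial version is a one-line loop in most languages; it is only the work-efficient parallel GPU formulation that balloons to roughly a hundred lines.

    +\ 3 1 4 1 5        ⍝ plus-scan (running sum): 3 4 8 9 14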
> I'm glad you think my compiler is a small system. The problem I'm solving is one that people said was simply too difficult and impractical to pursue. If I have made it so simple as to be dismissed as trivial, then that's good. :-)

I figure anything being done by one person is necessarily that trivial. Maybe you're doing the work of 100 people. Maybe the work of 1000. But you can't scale arbitrarily far; at some point you'll hit your limit. The amount of work one programmer can do is, ultimately, O(1).

> they also say, if they can do it in APL, they do, and they wish they could do everything in APL.

Fair enough; where I'm working there's a rather different view of the APL parts of our codebase.

> Still, just come to the live session and we can discuss all of the issues that you see with maintainability. If you can see a way to make the code simpler and easier to reason about at a macro level, I'll be all for it!

I can't/don't do audio/video/"live" I'm afraid (and if that's the only way you can explain the code, then that itself reflects badly on its maintainability). I'll read a transcript with interest.

I do think the value of conciseness is real and underrated. At the same time, it's very possible to overestimate it if you're looking right at the transition point where a project is small enough to keep in your head at once: if your project is very close to that line, then you can reap huge gains from small conciseness improvements, but not in a way that scales. I once looked at implementing a lot of the APL operators in Scala (it supports unicode identifiers and has a very flexible syntax, so you can actually get pretty close). But I've found that, at least in the context of a large codebase moving incrementally (and I firmly believe that's the one that ultimately matters, for the reasons above), the conciseness gain isn't worth the cost of not having clear English names for all the operations. Indeed, I now try to move away from symbols and short names in general as much as possible.
dang on Feb 5, 2017

> You can't build for single-programmer productivity and then retrofit maintainability afterwards

Perhaps my use of the word 'maintain' was confusing. I'm not suggesting that one programmer write such a system and others then take over maintaining it. I'm suggesting that 3 or 4 programmers write (and maintain) such a system together and all be intimately familiar with it.
 Sure, I get that. I just think a language needs to be built from the ground up to allow multi-programmer collaboration, and that there are few if any valuable lessons to be taken from what works in the single-programmer case.
You're asserting that this isn't multi-programmer friendly. I'll agree that it's not "code monkey" friendly, but I disagree that it is not oriented towards multiple programmers. And the APL language has almost all the features you would expect from a modern multi-paradigm language, including branching, control structures, recursion, exceptions, objects, frameworks, interfaces to other languages, and so on and so forth.

But APL was designed from the beginning to enable human communication. I would argue that almost all programming languages fail to be a good human medium of communication. The evidence I give in support of this assertion is how people write when they think the computer won't need to see the code, such as in academic publications on computer science: look at what they use in the paper. Almost all of the people who implement their ideas in one language or another fail to include the entire code in their papers, and they usually include some mathematical notation and diagrams to explain their ideas instead. They may include some small snippets of code, but they rarely, if ever, include the full code. Dan Friedman is an exception that proves the rule, if you will.

If you then take a look at how APLers communicate when they have ideas, you see code all the time, all day long. The APL community is the only one I've seen that regularly writes complete code and talks about it fluently on a whiteboard, between humans, without hand waving. Even my beloved Scheme programming language cannot boast this. When working with other humans on a programming task, almost no one uses their programming language as the primary communication method between themselves and other humans outside the presence of a computer. That signals to me that these languages are not, in fact, natural, expedient tools for communicating ideas to other humans. The best practices utilized in most programming languages are, instead, attempts to ameliorate the situation, to make the code as tractable and as manageable as possible, but they do not, primarily, represent a demonstration of the naturalness of those languages for human communication.
Academia is its own thing with its own incentives. I wouldn't generalise from what happens in academic papers.

When I see people communicating in (my part of) the industry, they use pseudocode, which is often described as looking like Python. They use, if anything, fewer symbols (and more space) than a real programming language. They do indeed elide parts of the code - often things like error handling.

To my mind that says: we should use languages in which code looks like pseudocode/Python (this idea was suggested in http://paulgraham.com/hundred.html , though he takes it in a different direction). And we should look for ways to elide in real code the parts that people like to elide when talking about programs: e.g. to have "ambient" error handling that's more or less invisible most of the time, without sacrificing the safety advantages of checking error cases (this is why I'm interested in, e.g., effect systems).
I'd be very surprised if your industry really did use complete pseudocode and only elided error handling. On the other hand, you're sort of assuming in your conclusion that pseudocode is the "better way" for languages because that's what people use, but you're leaving out the initial bias. I would argue that if you made current industrial languages more like pseudocode, you'd probably do better, yes, but it's a local maximum derived from an assumption of what the end result will be.

In other words, people use pseudocode because it's close to the code they intend to write and represents their current notational expectations. It's an enforcement of legacy methods of thinking. But many people have admitted that there is a problem with writing pseudocode-style programs for modern hardware performance, where taking advantage of parallelism is important.

Furthermore, I would argue that academia is relevant because it's one of the few places where the ideas are more important than the executable. If the ideas are communicated clearly, then you've succeeded. If we really want to program for the human, then we want our programs to be focused on the communication of ideas, and not machine-focused. And the reality is that if you take the machine away, and focus on human-to-human communication, without any "industrial" bias (expectation of machine execution), then rigorous idea communication is almost always pictorial, visual, and ideographic. Furthermore, the notations that people develop, and have developed over time, to communicate ideas never end up looking like mainstream programming languages. As people work with ideas, math notation is the quintessential notation for communicating human ideas rigorously. It is highly evolved for human consumption and manipulation, rather than machine-focused.

I believe there have also been some studies on how people without any computing background describe processes, and it's inevitable that many of the core "serial" programming concepts are not "natural" in human thought, but a very acquired taste.

Again, if you put a bunch of industry or non-industry professionals up at a whiteboard and had them illustrate their ideas rigorously to one another using just that whiteboard, I would be surprised if they naturally gravitated to any real programming language. And I doubt strongly that they would actually continue to use pseudocode at scale on the whiteboard.
> I'd be very surprised if your industry really did use complete pseudocode and only elided error handling. On the other hand, you're sort of assuming in your conclusion that pseudocode is the "better way" for languages because that's what people use, but you're leaving out the initial bias. I would argue that if you made current industrial languages more like pseudocode, you'd probably do better, yes, but it's a local maximum derived from an assumption of what the end result will be.

Error handling was one example - I see concerns like serialization, permissions, and transactionality commonly elided, and I look for better ways to handle them in programming languages as well.

> I would argue that academia is relevant because it's one of the few places where the ideas are more important than the executable. If the ideas are communicated clearly, then you've succeeded.

Maybe. That assumes that the successful papers (and successful academics) are those that communicate ideas clearly. I'm not convinced.

> the reality is that if you take the machine away, and focus on human-to-human communication, without any "industrial" bias (expectation of machine execution), then rigorous idea communication is almost always pictorial, visual, and ideographic.

Not my experience at all - if anything I'd say visual aspects tend to be a marker of less rigorous communication.

> Furthermore, the notations that people develop, and have developed over time, to communicate ideas never end up looking like mainstream programming languages. As people work with ideas, math notation is the quintessential notation for communicating human ideas rigorously.

Mathematics is one such notation; "legalese" is another, and philosophical terminology a third. I'm wary of generalising too much from mathematical notation alone.
> Not my experience at all - if anything I'd say visual aspects tend to be a marker of less rigorous communication.

I would point to the field of combinatorics, and to the traditional proofs of both the ancient Chinese mathematicians and those of the West, both of which took on various elements of geometry and spatial reasoning for a significant number of their proofs when other tools were not yet available. The development of algebra I see as a chiefly visual and ideographic one, even a tangible or malleable one. The development of UML diagrams is another. Flow charts are another. We have the abacus and Chinese counting sticks as well. And finally, while poetry is not specifically rigorous, it is efficient in a way that few other communication methods are, and we find a great deal of "visual cue" elements in that field. In the physical sciences and statistics, visualization is a very important tool. Mathematical notation itself is largely spatial and visual at scale.

As for legalese, I would argue that legalese is perhaps well designed for experts to be complete, but not for clarity. Comprehensiveness is different from clarity of rigor. And as for philosophy, vocabulary is not enough. You'll note that some of the best notational systems to arise came from philosophy departments working on logical systems; those are usually represented notationally using ideographic, rather than natural-language, forms. And even some Eastern philosophers who wrote very verbosely tended to make their arguments from visualizations in the mind to make their point.

Musical notation, again, has evolved into a spatial, visual notation. A large number of traditional writing systems were ideographic, including ones we now consider alphabetic/phonetic.
dmitriid on Feb 6, 2017

A codebase and its terseness rarely matter. Understanding the business processes that govern why the code exists is usually much more important than the code itself.

After that, familiarity of code comes first. And by familiarity I mean: common patterns, common solutions, the ability to bring new people into the fray.

Small, terse languages tend to breed long-running small teams with an insanely high bus factor.