Logic programming really falls into two main categories: search and constraint satisfaction. In that light, there are really only two programming paradigms: Temporal and Spatial.
Imperative programming is programming with time. Functional programming projects time on to space. When you add concurrency, either effectively becomes logic programming: Search runs multiple spatial processes in parallel. Constraint satisfaction treats each constraint variable as its own timeline.
The only reason we don't think of it this way is that we've traditionally linearized logic programs. You can linearize search with backtracking, and you can linearize constraint satisfaction with a constraint propagation queue.
Think about a mutable variable. In an imperative program X can start with A, then become B, then become C, etc. In a functional program, you can say that X doesn't change; instead there's really a subscript: X0=A, X1=B, X2=C. That is, you're projecting the time dimension, which is implicit, linear, etc., onto a space dimension: the name of the variable expands from ["x"] to ["x", 1] etc.
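To make that concrete, here's a tiny Python sketch of the same idea (the names and values are just illustrative):

    # Imperative: one name, values replaced over time.
    x = "A"
    x = "B"
    x = "C"          # the history of x exists only in execution order

    # Functional-style: time projected onto space as explicit versions.
    x0 = "A"
    x1 = "B"
    x2 = "C"         # each point in the "timeline" gets its own name

    # Or keep the whole timeline as a value:
    xs = ["A", "B", "C"]
    assert xs[-1] == "C"   # the "current" value is just the last index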
Monads are the (an?) ultimate exploration of this idea. You project the timeline of the entire world on to a space dimension. Each "bind" operation builds up a new world and you can have many forks of the world, as it's just a model, not the world itself. This model doesn't do anything until you "run" the monad by handing it off to some higher level interpreter.
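A very rough Python sketch of that world-passing idea (my own toy model, not any real monad library):

    # An "action" is a function from a world value to (result, new world).
    # bind chains actions by threading the world through; nothing happens
    # until run() hands the whole model a starting world.

    def unit(value):
        return lambda world: (value, world)

    def bind(action, f):
        def chained(world):
            result, world2 = action(world)
            return f(result)(world2)
        return chained

    def emit(line):                 # builds a new world with one more output line
        return lambda world: (None, world + [line])

    program = bind(emit("hello"),
                   lambda _: bind(emit("world"),
                                  lambda _: unit("done")))

    def run(action, world):
        return action(world)

    print(run(program, []))         # ('done', ['hello', 'world'])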
As for constraint satisfaction, check out Sussman's talk "We Really Don't Know How To Compute" <https://www.youtube.com/watch?v=O3tVctB_VSU> - Even though they implement the propagator networks with a message queue and a single thread, you could in theory run each propagator in parallel, sending messages back and forth.
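For anyone who hasn't watched it, here's a very small Python sketch of the flavor (loosely in the spirit of propagator networks, not Sussman's actual code): cells hold values, propagators re-fire when their inputs change, and the single-threaded queue is just one possible scheduling of messages that could in principle flow in parallel.

    from collections import deque

    class Cell:
        def __init__(self):
            self.value = None
            self.watchers = []          # propagators interested in this cell
        def set(self, value, queue):
            if value is not None and value != self.value:
                self.value = value
                queue.extend(self.watchers)   # "send messages" to watchers

    def adder(a, b, out):
        def fire(queue):
            if a.value is not None and b.value is not None:
                out.set(a.value + b.value, queue)
        return fire

    a, b, total = Cell(), Cell(), Cell()
    prop = adder(a, b, total)
    a.watchers.append(prop)
    b.watchers.append(prop)

    queue = deque()
    a.set(2, queue)
    b.set(3, queue)
    while queue:                        # drain the propagation queue
        queue.popleft()(queue)

    print(total.value)                  # 5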
Writing another comment, I had the thought that time is still present in the functional paradigm, though not in the same way as it typically presents in the imperative paradigm. This is a great way of thinking about it, thanks.
This is about implementation strategies, which I think are a lower-level idea than paradigms. Datalog is a logic language that can be implemented without using backtracking/"search", for example.
Compilation and interpretation are also implementation strategy issues, unrelated to programming paradigms. Almost any language can be implemented using either strategy, and most systems we think of as "interpreted" in reality employ a mixture of the two strategies.
I don't know what you mean when you say they are also spatial and temporal in nature. You are being extremely glib, and I don't really understand what points you are trying to make.
Sure, but besides maybe Clojure, I don't know of any homoiconic language that'd be popular enough in the industry that you'd expect questions about this showing up on interviews...
Is "async loader" the thing where a page comes up as nothing but a list of URLs, unless you turn on Javascript? I see that with Instagram links - it's not even blank, it's just a plain list of URLs.
Async just means that there is a script which, some time after the page is technically "fully loaded", starts to load more content. That's useful for updates or actions that should occur without a complete page reload (which would discard the "user state").
It doesn't necessarily mean you see a URL list. It can be implemented by storing the list in JavaScript code, and you don't "see" that JavaScript code.
The method you mention (mis)uses HTML tags as a datastore for the URL list.
A must-read if you're interested in the topic: Programming Paradigms for Dummies: What Every Programmer Should Know [1]. (Visual summary thanks to Wikipedia: [2])
In short, in this view, programming paradigms are traits of languages, or more precisely a programming paradigm is the particular combination of traits that you include in your language.
A single addition or removal of one language trait can make the language feel quite different.
> A single addition or removal of one language trait can make the language feel quite different.
I agree with this, and this is why I, to some degree, disagree with the idea of a 'polyglot programmer', in the sense of an experienced programmer having general "software engineering skills", an appreciation of OOP principles, etc., with the idea that they can quickly pick up any language from that base set of skills.
In reality, understanding the specific implementation of many languages (all of which differ), and the nuances of the features of the languages, are important. A single trait can make a significant difference in the way you should use a language, not to mention differently-evolved community standards/expectations.
I think most people mean "polyglot programmer" when they do in fact know several languages in detail. I've got a number of languages for which I am not "a world-class expert" but I am certainly an "expert", able to structure significantly-sized systems with best-practices and a fairly detailed understanding of all relevant costs and benefits I'll get from it. Precisely because I've picked up so many languages, I also know that the hardest part of that is what you point out, that every single difference ends up affecting the best practices. With the exception of the very heavily bondage-and-discipline languages (Haskell, Rust, Ada, Idris), the mere act of learning syntax and most of the semantics of most languages is often a less-than-one day task.
True, but I've known people use it to mean something like language-agnostic technologist, with the idea that they have enough experience in one or two languages that they could pick up anything else at senior level.
> Precisely because I've picked up so many languages, I also know that the hardest part of that is what you point out
This is exactly it. The people I describe have a few similar languages under their belt, and simply extrapolate from this.
I wouldn't, for example, assume an experienced senior Java dev could pick up Scala at the same level within a few months, despite the similarities; they simply lack experience with the new traits.
In many ways it's a worthy successor or sequel. It walks you through different paradigms, and shows how everything works with Mozart/Oz. A fantastic language with big influences from Lisp and Prolog.
CTM is a bit more focused on programming language theory; it even has a formal semantics appendix. CTM is highly regarded on Lambda the Ultimate as a basic primer to programming language theory. In contrast, SICP is a bit more hacky. It shows you how to build some abstractions, whereas CTM is more about their semantics and how to build programs using different paradigms.
Along with SICP, CTM is easily my favorite programming book. Close seconds are PAIP and TAOP (The Art of Prolog). I wish we could have a Lisp with Mozart semantics and a great ecosystem.
A pet peeve I've had about "goal based programming": it always sounds like people are just arguing for a higher level of abstraction. Is that essentially what it is? People always say "I want to tell the computer what to do, not how to do it", and I feel like that ignores the fact that right now you are able to do that to some degree. In my chosen languages, I don't have to allocate memory when I make an array. Isn't that me saying "I want an array, I don't care how you do it" and the computer obeying? Help me understand how "goal oriented programming" is a difference in kind and not degree. To me, it just looks like an argument to walk higher up the ladder of abstraction.
Goal based programming generally implies non-determinism of some sort. In a way, it is "just" higher up the abstraction ladder, but it is pretty profound in its presumptions in terms of how you program. Languages like Prolog or SQL have a master algorithm behind the scenes that accomplishes your goal, and so you spend a lot more time expressing the data and constraints on the data than on how it's done. In particular, it is hard to make assumptions about computational complexity or performance. In practice you wind up in a dual problem-solving pattern where you alternate between goal/problem expression and tinkering under the hood with the system internals to improve performance.
Whether that's good depends on the problem... certainly SQL has been pretty successful.
But "at a high level of abstraction" and "leverages non-determinism" still isn't enough to get to "goal oriented". Garbage collection involves both of those things. So does JIT. And constraint-based layout engines. Although that last one, unlike the first two, is goal-oriented, right? Is there a pattern?
You're right, they're not enough. I guess it really boils down to making the computer reason about the data it has to achieve a goal or answer a question, rather than applying algorithms to data. The algorithms are intrinsic and hidden. https://en.m.wikipedia.org/wiki/Reasoning_system
Constraint based layout engines are such an example. Business rules engines are another. It's the fuzzy dividing line where we start considering programming techniques to be AI instead of conventional. This line is somewhat arbitrary based on history.
"It's the fuzzy dividing line where we start considering programming techniques to be AI instead of conventional. This line is somewhat arbitrary based on history."
Imperative programming spells out exactly what happens next; even if it's some high-level abstraction like "findAnswer()", that's still telling us what happens next, and we can jump to the definition of "findAnswer" to see what lower-level step comes next (and so on).
In functional programming, we're still specifying "what to do next", but we're allowed to give a set of things to do; the language accumulates these tasks, and is free to perform them in any order, or even concurrently; the result will be the same regardless (due to confluence). For example in an expression like 'f(g(x), h(y), [a(b), c(d), e])' we're telling the system exactly what to do next, although it's free to perform these function calls in any order it wants (many real implementations choose to define a particular evaluation order, to make e.g. reasoning about performance easier).
In both of these paradigms, the solution to our problem is left implicit: we indicate a solution by the lack of next-steps.
In logic/constraint/goal-driven programming the answer to "what happens next?" is undefined; we haven't told the system what to do next, so it's undetermined. Instead we've told the system when to stop: we make the solution explicit and the next-step implicit. The runtime system has to guess what to try, so it shuffles symbols around and around, stopping if it stumbles upon anything we've designated as a solution. Again, real implementations do define their evaluation order more explicitly for the sake of performance (e.g. depth-first search for Prolog).
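To make the contrast concrete, here's a toy Python sketch of the logic-style case (the predicate and search range are invented for illustration): we only say what counts as a solution; the order of the search is the runtime's business.

    from itertools import product

    def is_solution(x, y):              # the explicit stopping condition
        return x + y == 10 and x * y == 21

    def solve():
        # the "runtime" shuffles through candidates in some order of its choosing
        for x, y in product(range(100), repeat=2):
            if is_solution(x, y):
                return x, y

    print(solve())                      # (3, 7)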
There have been a few successful goal-based programs. Not many.
- TK Solver [1] Once called "the crooked accountant's spreadsheet", it's a spreadsheet-like program where you can change the outputs, and it will try to compute a consistent set of inputs. Good for "what if" problems.
- Kang. This was an early hacking program. It took a set of attacks, and given a starting state (such as "user not logged in") and a goal state ("kernel mode execution") would try to use its tools to reach the desired state.
- Map route finders. Specify start and goal, and a route is generated.
The thing that distinguishes genuine "declarative" or "goal-based" programming is that the order of expressions in the source language doesn't affect the order of execution[0].
On the other hand, I see the level of abstraction as (roughly) the average number of machine instructions executed for every statement or expression in the source language.
With these two definitions, it's clear that while "declarative" languages will almost necessarily be at a high level of abstraction, you can also have languages that operate at a high level of abstraction without being declarative.
caveat: the original link wouldn't load for me, so I don't know if my response makes sense in light of the original article.
My point is that abstractness and imperativeness are orthogonal. For instance, Bash and C are both imperative in that they both structure programs as a sequence of commands. However, Bash operates at a much higher level of abstraction because each command "does more work" (by which I mean it roughly translates to more machine instructions).
Prolog is both abstract and declarative. Prolog programs are structured as a set of propositions (i.e. declarations). The difference between Prolog and Bash is greater than the difference between Bash and C, because the nature of Prolog implies a fundamentally different way of designing programs.
One of the canonical examples of goal oriented programming would be sorting a list.
Rather than expressing how to take an arbitrary list and rearrange the elements so that they're sorted, you express what it means for a list to be sorted and let the algorithm figure out how to get there.
* An empty list is sorted.
* A single element is sorted.
* If one splits the list into its first element and its remainder then it is sorted if the remainder is sorted and the first element is less than or equal to the head of the remainder.
This doesn't lead to an efficient sort, but it is enough for an algorithm to take any list and produce its sorted form. You can, with the right constraints, get a goal oriented system to carry out just about every sorting algorithm, and the benefit is that they're really concise and they read like the high level pseudocode you might see in an algorithms class.
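A minimal Python sketch of that, assuming the crudest possible generate-and-test "solver" (roughly what a naive Prolog permutation sort does): the three rules become a predicate, and the system just searches for an arrangement that satisfies it.

    from itertools import permutations

    def is_sorted(xs):
        if len(xs) <= 1:                     # empty list / single element is sorted
            return True
        first, rest = xs[0], list(xs[1:])
        return first <= rest[0] and is_sorted(rest)

    def goal_sort(xs):
        for candidate in permutations(xs):   # the solver's search strategy
            if is_sorted(candidate):
                return list(candidate)

    print(goal_sort([3, 1, 2]))              # [1, 2, 3]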
If this were true, why would we have so many sorting algorithms in the first place?
It's not enough to express the desired result. We would also need to express our preferences for all the other decisions and trade-offs that are made during software development. Do we need this sort to be fast or use as little memory as possible? Synchronous? Does it need an index? Are we optimising for writes or reads? Persistence? What language are we sorting by?
That's for a simple sort. By the time you get into actual problems then it all gets more complex and the trade-offs become something you need to understand before making a decision on. Having an expert to make those decisions is good.
One advantage could be ease of refactoring.
Imagine starting with "goal: the list is sorted", but the resulting program uses too much space to achieve it.
It could be a lot easier to add "goal: the list is sorted in O(n) space" than it is to rewrite your sorting algorithm.
I don't see how this is different from any other built-in function, or an external library really. Perhaps it seems different because there are a lot of well-known sorting algorithms that have different strengths and weaknesses? There are any number of ways of summing a list (most of them stupid); does sum() qualify as goal oriented? I don't think what you're saying is without merit, but I do think the taxonomy needs to be discarded if that's what it's referring to.
The difference is that those three statements encapsulate the entire source code required to perform a sort. It's not calling any kind of built-in sorting function. From those constraints the system is able to logically derive the sorted list.
> those three statements encapsulate the entire source code required to perform a sort.
It is calling the resolver system though and all the accompanying functions that actually do the sorting.
Not sure what your definition of "source code" is, but I'm pretty sure nobody counts external library function implementations as source code for the program. Same as you don't count the OS kernel as part of your program's source code.
When you use a library to sort an array, you're not telling the computer how to sort an array, nor are you telling the computer what a sorted array is. That distinction is for the person implementing the sort, not really as relevant for someone just using it.
Interesting. So merge sort might be something like this?
* empty list is sorted
* a single element is sorted
* merging two sorted lists results in a sorted list
Are there languages that could take this and do merge sort?
To me, "goal oriented programming" sounds closer to specifying your program's behavior somehow and letting your programming environment figure out the best implementation on its own.
It'd be kind of like test-driven or behavior-driven development, except that the programmer only writes the test cases, and the programming environment generates a program that passes the tests.
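A toy Python sketch of that idea, with everything (the tests and the candidate space) made up for illustration; real program synthesis is enormously harder than enumerating a handful of lambdas, but the shape is the same: tests in, program out.

    tests = [(0, 1), (1, 3), (4, 9)]              # (input, expected output) pairs

    candidates = [
        ("x + 1",     lambda x: x + 1),
        ("x * 2",     lambda x: x * 2),
        ("2 * x + 1", lambda x: 2 * x + 1),
        ("x * x",     lambda x: x * x),
    ]

    for source, f in candidates:                  # keep the first program that passes
        if all(f(x) == y for x, y in tests):
            print("synthesized:", source)         # synthesized: 2 * x + 1
            break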
There's some truth in what you say, but there's a leap that we have not yet been able to make. The way I think of it, anyway, is that we'd like to specify just the contract (preconditions and postconditions) of a module, and have an efficient implementation synthesized automagically (complete with well-designed submodules with their own contracts).
That capability would be more than just another rung up the abstraction ladder.
Interestingly, the problem of reasoning about programs at that level seems to be pretty nearly "AI-complete". This is true even though it's a formal domain -- one doesn't need, for example, the intuitions about the behavior of physical objects that we humans acquire through years of experience, nor an understanding of human behavior and emotions, etc. etc. We work pretty hard to make sure the behavior of a program is predictable just from understanding the program itself (there are occasional exceptions, of course). Yet even reasoning in such a restricted domain, about even very pedestrian programs, is beyond the state of the AI art at the moment.
It's a particular kind of abstraction, and it's definitely not new -- others have already mentioned SQL and Prolog, but there's also Make.
The way I think of declarative programming is that it's essentially executable data; not in the manner of lisp, but a description of the problem space (usually as a set of constraints) is translated (compiled, if you will) into an execution plan that satisfies the description.
Declarative programming is the region where you've crossed the (fuzzy) border from eliding the small hows (e.g. memory management) to the big hows (e.g. the order to build components).
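Make is a nice tiny example of that translation step. Here's a sketch of it in Python (the rules are invented; graphlib is in the standard library from 3.9 on): the source only declares dependencies, and the execution plan, a build order, is derived from the description rather than written out.

    from graphlib import TopologicalSorter

    rules = {                         # target -> things it depends on
        "app":    ["main.o", "util.o"],
        "main.o": ["main.c"],
        "util.o": ["util.c"],
        "main.c": [],
        "util.c": [],
    }

    plan = list(TopologicalSorter(rules).static_order())
    print(plan)                       # sources first, then objects, then "app"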
At a certain point, enough of a difference in degree changes the way you express a problem or thought, and at that point it's become a difference in kind. A loose aggregate of sand eventually becomes a pile when you've added enough.
An array still isn't a part of the problem domain, for most problem domains. It was introduced as part of your solution but it isn't in the same vocabulary.
A great example of transferring between problem domain language and implementation language is in game scripting. The game might have internal stuff for allocating actors and giving them various attributes, with asset references, state machines, etc. But when you go to script your behavior what you want to work with is "do this sequence of events in order, sometimes pausing, interrupting or branching it." And while you can do this by formalizing a new state machine each time, that's actually too powerful an abstraction to use to populate a script-heavy game full of one-off cutscenes, and many games will therefore go domain-specific and create a special cutscene system that limits the range of programmability.
So what I see goal oriented programming (and related ideas like model-oriented or intentional programming) arguing for is more along the lines of "principle of least power" - instead of wielding the most powerful abstractions directly, you invent a less powerful one to drive them, often compiling from less power into more power as a build step.
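As a concrete (entirely made-up) example of that least-power style, in Python: the cutscene is just data, and a tiny runner interprets it, rather than each scene being a hand-rolled state machine.

    cutscene = [
        ("say",  "guard", "Halt! Who goes there?"),
        ("wait", 1.5),
        ("move", "hero", (10, 4)),
        ("say",  "hero", "Just a traveler."),
    ]

    def run(scene):
        for step in scene:
            if step[0] == "say":
                _, actor, line = step
                print(f"{actor}: {line}")
            elif step[0] == "wait":
                print(f"(pause {step[1]}s)")
            elif step[0] == "move":
                _, actor, pos = step
                print(f"{actor} walks to {pos}")

    run(cutscene)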
This is different in nature from high level abstractions intended to add more general-purpose leverage and user control over the problem definition, which garbage collection, metaprogramming, and formal-proof tools aim for.
There are many problem statements which operate on constant memory or employ a predetermined set of algorithms with known memory usage patterns, and they might employ one or more of these high level leverage tools as an intermediate to compile the definition to running code, without the user model or the runtime model needing them.
I wish I could look it up but the site seems to be down. I've seen it before.
But it sounds like something I experienced before. You struggle to try to understand what this "new" thing is that's hot and it doesn't seem new to you at all. It frustrates you as you see other people talking about it excitedly and you feel like you're missing something.
In my experience that's all it is. It's not new but a different spin on a long established and understood concept.
I think there is a fourth category not covered by the OP, which is Generative Programming[0].
This paradigm does not have a fundamental reduction nature (the opposite, actually), is not concerned with memory cell manipulation, and is not limited to predicate calculus (though its products can certainly be used for such).
Well known examples include C++ templates, Scala macros, JavaScript/Perl/Ruby evaluation of text blocks, Aspect Oriented Programming[1], amongst others which I am sure I have unwittingly omitted.
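In the "evaluate a text block" style, a minimal Python example (names invented): the program writes a specialized function as text and then executes it, which is the generate-then-run move these techniques share in some form.

    field_names = ["x", "y", "z"]

    source = "def make_point({args}):\n    return {{{body}}}\n".format(
        args=", ".join(field_names),
        body=", ".join(f"'{n}': {n}" for n in field_names),
    )

    namespace = {}
    exec(source, namespace)                     # generate, then execute
    print(namespace["make_point"](1, 2, 3))     # {'x': 1, 'y': 2, 'z': 3}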
It doesn't really compute anything though - other than producing more code which is in one of the core paradigms. I wouldn't argue that this is a paradigm in the same category. It's a very handy set of (often very complex) shortcuts. C++ templates at one level just save you a lot of typing. At another level they let you do type-level computation, but that's just shifting execution into the compile phase instead of the runtime phase.
AOP, as far as my experience goes, is a structural thing. Again, it doesn't change anything fundamental about how programs are evaluated but rather how the pieces of a program are assembled.
You beat me to it. That's exactly what I was thinking. It would be rooted in equational logic or something like that. The K Framework they did the C semantics and KCC compiler in comes to mind. There are also term rewriting languages like Stratego that do everything that way. Lisps as used by Alan Kay et al. are like a hybrid of that, functional, and imperative programming that made for great productivity with DSLs.
So, it may be a subset of something else or its own paradigm. Worth people thinking on. Plus, I think the style doesn't get enough attention given the results I've seen its practitioners pull off with little code.
You’re ignoring the point of AOP. If writing Hello world is a cross-cutting concern as its placement in an aspect would seem to imply, then what are the core concerns of the solution? How do those core concerns differ from those that cross-cut? AOP can only really be applied when you’ve answered that.
This is like saying that a car isn’t a -real- vehicle because it can’t fly in the sky without a road beneath it. It’s silly.
I am stressing it to highlight the fallacy of considering AOP a "programming paradigm".
AOP was not presented as 'a "programming paradigm"'. It serves as a novel example of "Generative Programming", which is the paradigm I mentioned.
The reasoning behind including AOP in the examples is based on the ability to use it to produce, manipulate, and/or otherwise enrich program flow independent of the assets being processed. Thus its "generative nature", such as when environments use declarative constructs (such as Java Annotations[0] or C# Attributes[1]) decorating functions/methods/types to manage DBMS transactions.
I wonder if the author would consider genetic programming (generating random code and executing it to compare it against a fitness function, sometimes used in computer game bots) an example of their "missing paradigm" - it fits the description they give.
FunctionalProgramming
* where the value of an expression is computed, usually close to the lambda calculus.
* or to put it differently: where the fundamental operation is reduction (of applicative terms).
ImperativeProgramming
* where cells in some sort of memory are filled and overwritten with values. Inspired by the TuringMachine and curiously never mentioned in the above table (or to put it differently: where the fundamental operation is assignment.)
* Actually, this generalizes to the fundamental operation being communication. The common case is simply communication to a cell maintained by some memory service (you send either a 'set' or a 'get', or possibly an 'apply-function' as needed for atomic ops or fast XOR processing). The more general case can consist of sends and receives on a fully networked, distributed model.
* * (different author replying) Actually, this "generalization" sound much more like the ActorsModel, which is no where near ImperativeProgramming, actually the actor model is much more similar to FunctionalProgramming than it is to imperative programming.
LogicProgramming (and ConstraintProgramming)
* where a solution to a set of logic formulas is sought; very declarative and incorporating some sort of search strategy like backtracking.
* or to put it differently: where the fundamental operation is satisfaction of a predicate.
* ConstraintProgramming envelopes LogicProgramming, since any logic domain can be expressed in terms of a constraint system, but the inverse is not often easy to express (due to the more strictly typed variables in LogicProgramming). However, the fields are disparate enough to have been split into LogicProgramming, ConstraintProgramming, and ConstraintLogicProgramming. (ConstraintAndLogicProgramming). This sort of programming is also called 'DeclarativeProgramming', but the word 'declarative' is somewhat overloaded.
That's it! Everything else is built on one of these three paradigms (Functional, Imperative and Logic), while sometimes incorporating elements of the others.
This article is good but misses the most common programming paradigm: Excel programming. Not all programmers know this powerful paradigm, and so they tend to write code instead of solving problems.
I feel like some sort of Deep Learning oriented programming might be the "new logic programming". Instead of the horn clause engine you get the DL engine. Essentially you have a "universal mapping function" and "parameter fitting". Instead of defining facts you provide data, instead of rules you provide mappings and you also get the "inference for free" (which is sort of the battle cry of Prolog). Instead of logical deductions you get probabilistic deductions. Are there any dedicated machine learning/deep learning languages or DSLs that work at this level of abstraction?
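Not an answer to the DSL question, but to make the "data instead of facts, fitting instead of rules" analogy concrete, a toy Python sketch (the data and the linear model are invented for illustration):

    data = [(0, 1), (1, 3), (2, 5), (3, 7)]      # the "facts" are just examples of y = 2x + 1

    w, b, lr = 0.0, 0.0, 0.01
    for _ in range(5000):                        # parameter fitting by gradient descent
        dw = db = 0.0
        for x, y in data:
            err = (w * x + b) - y
            dw += 2 * err * x
            db += 2 * err
        w -= lr * dw / len(data)
        b -= lr * db / len(data)

    print(round(w, 2), round(b, 2))              # ~2.0 and ~1.0, recovered from the data
    print(w * 10 + b)                            # "inference": predict y for an unseen x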
I reckon that's closer to "goal-based programming" (the proposed "missing" fourth programming paradigm on that page). Basically (as I interpret it at least) defining test cases / behavior specifications for what you want the software to do, then letting an artificially-intelligent sort of "autoprogrammer" automagically generate a program that conforms to that specification.
Software like Cucumber [0] might be part of that particular paradigm once the missing pieces around AI/ML take proper form.
In the categorization presented in the OP, SQL would be categorized as "constraint programming" as described here[0]. Essentially, SQL conforms to a subset of predicate calculus[1] whose operations are quantified by relational algebra[2].
As originally stated, maybe... but for years SQL has been slowly degenerating into just another imperative language. The ideal is that you can write something like "select * from A,B where A.something = B.something". But for whatever reason, SQL interpreters have consistently and dismally failed at producing an efficient execution plan without the hints offered by nested select statements, explicit joins, and now stored procedures. Imperative is still king.
This is a question I've thought about a lot this year. How many programming paradigms are there? How will we know if/when we've discovered them all? Is there a way to arrange them so that gaps in the table predict yet-to-be-discovered paradigms in the same way the periodic table predicted elements or the square of R/C/?/L predicted memristors?
After reading some of Greg Egan's hard science fiction that explores the consequences of a universe where time is a spacelike dimension or another where there are two timelike dimensions, I wondered if programming paradigms could be categorized according to what kind of world lines data or variables may have.
For instance, in some mathematical spacetimes, the future looping back on the past is not a construct you can create. In others (such as Egan's Orthogonal universe), it is quite possible for world lines to arbitrarily loop back on the past. This could be analogous to whether, in a programming paradigm, statements not yet reached can affect the statement you're looking at. For instance, whether a logic program supports constraint satisfaction.
Mutability and immutability could be analogous to various ways of resolving the Grandfather Paradox. If the result is that the timeline is changed/overwritten, that is analogous to a mutable programming paradigm. If the result is that you are either prevented from doing it, or doing it spawns a duplicate timeline where history is changed, then that is analogous to an immutable or functional paradigm.
Finally, it should be noted that the program and its entire execution state-space could be regarded as a "four dimensional object" the same way you could consider the universe + its history as a four dimensional mathematical object. (Not really "4", because the space of the program isn't necessarily 3 dimensional, but using the term as a metaphor.) In this sense, you could see the actual execution, i.e. the implementation of the execution of the program, as a vector or walk through this time-state-space that does not necessarily have to follow the (human) conceptual vector of time through the program! (Not to mention the lexical order of the program text.) For instance, a SQL engine might build a query plan that approaches the execution of a SQL "program's" time-state-space in a much different direction and order than what a human would call the "time vector" through the program.
In Egan's Orthogonal universe he deals with the question of how can living beings experience local time when a universe has all 4 dimensions as spacelike by making a distinction between the arrow of time defined by entropy versus "timelike" dimensions. In our universe the entropy arrow of time usually aligns with the timelike dimension, but in the Orthogonal universe the entropy arrow of time could point along any dimension, but which dimension is the "time" dimension is set by the combined entropy of the local surroundings. Similarly, in a programming paradigm you can choose to define the "time dimension" in many different ways, but there is also an entropy arrow of time.
As an example, imagine a simple imperative programming language that conceptually executes from top to bottom. You could, in principle, make a bizarre implementation that executes backwards, starting with the last instruction and all possible result states, and searches for precondition states that could have caused that result. Repeat for the second-to-last statement for all of the candidate results, and so forth, until we reach the initial state.
It's clear that this is possible to implement if the language is restricted enough (for instance if it has few states, or if the only allowed operations are of a certain type). It's also clear that the big-O time/space complexity of this implementation strategy is astronomical for most complex programs. But it is instructive to look at why this is: it is because this time-reversed execution strategy is going backwards against the entropy arrow of time! This is reflected in the enormous amount of energy it would require to compute (drawing on the entropy of an outside system's energy source to reverse local entropy), and/or the gargantuan amount of storage it would require (increasing the entropy in space of an outside system in order to reverse local entropy).
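A tiny Python sketch of that backwards run (the "program" is a made-up one-liner): for the final statement, enumerate every prior state that could have produced the observed result, which already means sweeping the whole state space for even a single step.

    def step(state):                  # the forward program: x = (x * 3) % 10
        return (state * 3) % 10

    def preimages(result, state_space=range(10)):
        return [s for s in state_space if step(s) == result]

    print(preimages(7))               # [9] -- the only state that steps forward to 7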
Yet at the same time, no one would bat an eye at certain classes of "programs" "running backwards", such as a linear equation solver, or a layout engine, or a logic constraint solver, etc. (One wonders whether a nominally imperative assembly language for a machine based on reversible computing would also fall into this category.)
What I'm getting at is that there are myriad "potentially timelike" dimensions in programs: lexical order, call stack, real execution time, the programmer's conception of how time flows in the conceptual program, etc. And there's also an entropy arrow of time related to how hard/easy it is to reorder the operations. Finally, I view the execution implementation as sweeping a hyperplane (or hypermanifold) through the four dimensional time-state-space of the program & the result of its execution. Depending on the ways that the time-state-space of the program are connected (or connectible) determines along which direction(s) that sweep may or must proceed.
Programming paradigms may be understandable as rules governing how this time-state-space may be connected up, which in turn affects what the entropy arrow of time looks like in execution-implementation-space.
It would be awesome if he/she used whichever one would best enable it to actually withstand being posted on Hacker News and still work. I'm not even sure what it's doing while it's playing that loading animation, because it ought to be static content one would think. Unfortunately it never loads and is also not available in the Internet Archive. Anyone have a copy of the text?
Waste of time. Sounds like an architecture astronaut. First he mentions Functional programming and then under imperative he babbles and then says this:
Actually, this "generalization" sound much more like the ActorsModel, which is no where near ImperativeProgramming, actually the actor model is much more similar to FunctionalProgramming than it is to imperative programming.
One of the important things to remember when reading C2.com pages is that they aren't written by a single person. They're more akin to wikipedia talk pages without signatures (variations of formatting are the hints to "someone else wrote this").
Read without this background, it may seem like the writings of a ranting crazy person. With the federated wiki remodel, this history was flattened.
The "Actually" part is likely a different author. And another author after the {hr}. And the bullet points under the missing padaradgim are other authors. Then another author for the hr block, and another author after the next one, and then yet another author at "I don't know..." and another author italicizing, and then one that signed as top and... so on and so on.
Indeed. C2 was the first wiki, and these implementation details show why, for a long time, such a promising concept never went as far as it should have. The standard 'template' approach of Wikipedia pages, and the clarity regarding edit history, were a great step forward.
If you look at an archive.org of the old site - http://web.archive.org/web/20160709091504/http://c2.com/cgi/... for example, at the bottom the edit date is a link. That took one to the edit history showing the IP address that made the change and the diff. It was also something that was robots.txt'ed and so isn't in archive.org.
C2 was the very first wiki. It has a great deal of historical significance.
"CS speak"? Having a formal education in CS should not be derided. Algorithms, data structures, and other components of computer science are important, and programmers lacking a knowledge of these areas can't really be trusted to work in the more demanding corners of our industry. (Many interviewers select based on these criteria.)
I didn't mean to deride CS terminology, but in TFA it's incoherently thrown around, like someone who wants to sound like an expert but can't make a solid case for anything and keeps changing their mind about what concepts even apply. I'm not sure where you were going with the trust and interviews thing, but I wouldn't hire the author of this piece (turns out it's not one author though, it's a wiki).