Abstract: Software designers compose systems from components written in some programming language. They regularly describe systems using abstract patterns and sophisticated relations among components. However, the configuration tools at their disposal restrict them to composition mechanisms directly supported by the programming language. To remedy this lack of expressiveness, we must elevate the relations among components to first-class entities of the system, entitled to their own specifications and abstractions.
This 1994 paper is the important bit rather than the 2019 post. What it describes isn't much different from the interfaces that we have in operating system loadable modules or application ecosystems with plugins. The only distinction is the requirement of first-class status. This would require a unification of language, operating system, and application. The closest I've seen to something like that is NeXT with Objective-C and the NS framework/Kits. The evolution of that into macOS/iOS, Swift, and *Kits is less so.
The other thing that comes to mind when reading this is the adage about any configuration file format evolving to become a poorly implemented subset of lisp.
As long as software development is a commercial endeavor I don't see how we'd ever end up with full interoperability with the goals being counter to it.
> This 1994 paper is the important bit rather than the 2019 post
Absolutely agreed, in terms of significance! But a slightly distinct point.
> The only distinction is the requirement of first-class status.
But an important distinction!
We can currently build this. Which is good. But we cannot express it directly. This is bad, as this aspect is getting more and more important.
> The closest I've seen to something like that is NeXT with Objective-C and the NS framework/Kits.
Also 100% agreement! And probably not a coincidence that this is my "home". Also important: Brad Cox's book about Objective-C, which is very architectural and goes far beyond the language. And shows you what the language is trying to accomplish. Alas...
> The evolution of that into macOS/iOS, Swift, and *Kits is less so.
Exactly. :-( We seem to be moving backward instead of forward.
> The other thing that comes to mind when reading this is the adage about any configuration file format evolving to become a poorly implemented subset of lisp.
I think that's two adages: (1) all config file formats evolving into cobbled together programming languages and (2) any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp. [1]
This ties into one of the few points where I actually disagree with Mary Shaw; otherwise all I am doing is implementing her vision. Well, my feeble misinterpretation of her vision. Anyway, my disagreement is that I do not believe ADLs and programming languages are distinct, and I also believe that this belief that they are distinct is what's been preventing all this insight into software architecture from being as useful as it should be. Hence Objective-Smalltalk. (ArchJava was another foray that had other problems, again IMHO).
I could see how the same language could be used for ADLs and programming, but that implies programming all done in one language, which is the part I don't see happening. I don't even know that it's desirable. I can't think of a language that I'd want to use for everything.
Being possible isn't even the hard part--it's motivating everyone to use it. The trend has been to own an aspect that is controlled by the first party and limit interoperability.
> I could see how the same language could be used for ADLs and programming
I don't buy that there is an actually meaningful distinction between "ADL" and "programming". That's the point.
Programming today is almost entirely composing existing procedures/functions/objects/modules/libraries/programs in order to build a system.
> but that implies programming all done in one language
Why on earth would it imply that? It's not like there currently is only one "general purpose" language or just one "integration language".
Objective-C is also a language that combines the integration mechanism ("Objective-") with the "programming" part ("C"). See Brad Cox's writing. But why have the integration mechanism be only messaging? Why not support others as well?
And despite Objective-C combining these two aspects, there are other languages. And in fact, Objective-C is particularly good at talking to other languages. Objective-S is that, and also particularly good at talking to filters/pipelines (or being placed in a Unix pipe) and also particularly good at talking to REST servers via HTTP, and at creating modules that are accessed via HTTP by anyone. And at talking to filesystems and exporting components as FUSE servers.
So I really don't get where this There Can Be Only One™ Uber-Language idea is coming from. It is the exact opposite of reality, at least in this case.
> I can't think of a language that I'd want to use for everything.
I can :-). For some specific definition of "everything". Once I get the native compiler fully working.
> [hard part is] motivating everyone to use it.
Motivating people to use stuff definitely is a hard part, but not sure why it has to be "everyone". But the way to get there is to identify a problem that is causing sufficient grief and provide a solution that is sufficiently compelling. I think I'm getting there. And then talk about it.
I've said a few times that modern microservice architectures feel a lot like what traditional object-oriented programming was about, and I think this is another confirmation of that. I think that these days the connectors are JSON over HTTP, RPC and things like that. When a service exposes a JSON over HTTP API, I can use it from any programming language, from any program.
So "first class connectors", to me, would be something like a language describing how to interact with a service, and a code generator that takes that language and transforms it into code in your language. For example, the aws-sdk-rust [1] says:
> The SDK is code generated from Smithy models that represent each AWS service. The code used to generate the SDK can be found in smithy-rs.
To me, Smithy with a code generator looks like a connector that would increase your productivity. But I'm not sure if we can call something that relies on code generation "first class". I'm not sure what "first class connectors" would look like either.
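For what it's worth, the shape of a generated connector can be sketched by hand. Here is a minimal Python sketch of the kind of thin client a Smithy-style generator might emit — this is not actual Smithy output; the service name, routes, and operations are all invented for illustration:

```python
# Hypothetical hand-written stand-in for a generated JSON-over-HTTP "connector":
# a thin client that maps service operations onto HTTP calls and JSON bodies.
import json
import urllib.request


class InventoryClient:
    """Invented example service; a generator would emit one method per operation."""

    def __init__(self, base_url):
        self.base_url = base_url.rstrip("/")

    def get_item(self, item_id):
        # GET /items/{id} -> parsed JSON body
        with urllib.request.urlopen(f"{self.base_url}/items/{item_id}") as resp:
            return json.load(resp)

    def put_item(self, item_id, body):
        # PUT /items/{id} with a JSON payload
        req = urllib.request.Request(
            f"{self.base_url}/items/{item_id}",
            data=json.dumps(body).encode(),
            headers={"Content-Type": "application/json"},
            method="PUT",
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)
```

The point of the sketch: the connector knowledge (routes, verbs, encodings) lives in the generated class, so the caller just invokes methods. Whether generation makes it "first class" is exactly the open question.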
Very cool - thanks for sharing; I'd not seen Smithy before (and was quite curious how AWS manages it "all"!)
From the web page:
> A Smithy model enables API providers to generate clients and servers in various programming languages, API documentation, test automation, and example code.
This blog reminds me of what Joel Spolsky calls architecture astronauts. It talks about "glue" / connectors integrated with a programming language, which at a high level sounds really nice. But I am skeptical of how well one could implement it in practice. Interoperating is far from standardized, and I definitely want concrete examples illustrating what "Architecture Oriented" is in code and not theory. I hope there is a part two for this.
> Objective-S is the first general purpose programming language.
You should consider removing this claim, since it is a flat-out lie, or at least a willful disregard for the language that other people use. If you want to communicate what is novel about this programming language, you need to remove this. It heavily detracts from the architectural mechanisms that you start to explain on the "About" page (http://objective.st/About) which seems to be the actual meat, where you could flesh out further explanation.
It's not a lie, though it obviously is controversial.
The languages we call "general purpose" are nothing of the sort. They are Domain Specific Languages for the domain of algorithms.
We press them into service for all sorts of things they aren't actually very good at, hence the problems. We do so because we don't have any alternatives.
Took me a long time to figure this out. And a much longer time to communicate it concisely. So yes, I can deal with it not being immediately obvious.
Just my opinion: this sentence really does you no favors. I read it, then re-read it because I wasn't sure whether I missed a word, and then I closed the tab, because writing something like this seems incredibly dishonest.
I always thought CS theory misses out on the more interesting area of computation. As if algorithms are just one "dumb" way to map the giant reality of creative potential into itself. Practically speaking it often fails to do that, despite our theoretical understanding...
Even though your language may be "turing complete" like other "general purpose" languages in theory, if it elicits more interesting algorithms and creative expression, I do kinda get your point.
True. That doesn’t answer my question though. Can you give an example of a practical problem you can solve in Objective-S that you can’t solve in (say) C++?
1. Anything that can be solved in any one Turing-complete language can be solved in any other Turing-complete language.
2. Both of these are Turing-complete languages, so obviously anything that can be solved in one can be solved in the other.
3. Lots of domain-specific languages are also Turing-complete, so anything that can be solved in C++ or Objective-S can be solved in them and vice versa.
4. It thus follows that (a) "can be solved" vs. "cannot be solved" is not a useful criterion for distinguishing between Turing-complete languages. Or that (b) there are no relevant differences between Turing-complete languages.
Another thing to keep in mind is that the usefulness of "glue" depends on what it can stick. For example function composition is very useful in functional programming, precisely because (first-class) functions are used for so many things in FP.
Relatedly, "composition" (of functions or anything else) is a powerful idea precisely because the result of a composition is the same kind of thing as the components; hence we can keep composing more and more.
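That closure-under-composition property can be shown in a few lines. A minimal Python sketch:

```python
# Function composition as "glue": the result of composing two functions
# is itself a function, so we can keep composing indefinitely.
def compose(f, g):
    return lambda x: f(g(x))


inc = lambda x: x + 1
double = lambda x: x * 2

inc_then_double = compose(double, inc)    # x -> (x + 1) * 2
pipeline = compose(inc, inc_then_double)  # composing a composition: same kind of thing

print(inc_then_double(3))  # 8
print(pipeline(3))         # 9
```

This is why first-class functions make composition such effective glue in FP: the glue's output is again glueable.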
> Therefore, to increase one’s ability to modularize a problem conceptually, one must provide new kinds of glue in the programming language.
> He just made the important point that "one must provide new kinds of glue" and then the amount of "new kinds of glue" is the smallest number that actually justifies using the plural.
The real issue is the appeal to authority. John Hughes didn't make a point; he made a claim.
Specifically, "new kinds of glue" is a troubling phrase that has likely been misused beyond just this blog post. The statement implies there need to be more kinds than were common at the time of writing (maybe?). It doesn't specifically mean there need be more than lambdas, closures, and mixins, or more implementations of the same in different ways (runtime composition via aspects, copy-paste composition via traits, etc).
I would think this needs more study, as it's more of a casual observation than a strong claim. Was Perl the most modular? Maybe so. I don't think that made it the most useful, implying an upper boundary, as Marcel noted.
I count > 100 leaf nodes in their classification tree. Not sure I agree with all of them, and there are probably others they missed. But more than 2 is trivially fulfilled.
> Was Perl the most modular?
Hmm...Perl wouldn't come to mind as something with a way for a user to define and refine new connectors.
> Perl wouldn't come to mind as something with a way for a user to define and refine new connectors.
Anyone can add new features to a language that users want to use. The source of the new connectors is incidental (user-defined vs native syntax). That aside, operator overloading fits the specific "user glue" you are focusing on. Perl added this in 1999. Perl pretty much had everything (Moose object system, etc).
> That aside, operator overloading is fits the specific "user glue" you are focusing on
I thought so as well, for a long time, but it turns out not to be the case, because you can't properly do "connection" via call/return. You can do it, but not "properly".
The Problem: Architectural Mismatch
------
The crux is that all mainstream and virtually all non-mainstream languages aren’t really general purpose, though we obviously think of them and use them that way. They are DSLs for the domain of algorithms, for computing answers based on some inputs. Obviously with many variations, but the essence is the same. a := f(x). Or a := x.f(). Or a: x f. And the algorithmic mechanism we have for computing answers is also our primary linguistic mechanism for structuring our programs: the procedure/function/method. (Objects/classes/prototypes/modules are secondary mechanisms).
(see also: the ALGOrithmic Language, predecessor to most of what we have, and allegedly an improvement on much of that).
As long as our problems were also primarily algorithmic, this was perfectly fine. But they no longer are, they are shifting more and more away from being algorithmic.
For example, if you look at how we program user interfaces today, there hasn’t really been much progress since the early 80s or 90s. Arguably things have actually taken huge leaps backwards in many areas. I found this very puzzling, and the common explanation of “kids these days” seemed a bit too facile. The problem became a bit clearer when I discovered Stéphane Chatty’s wonderful, and slightly provocatively named paper "Programs = Data + Algorithms + Architecture: consequences for interactive software engineering".
He explains very well, and compellingly, how our programming languages are architecturally mismatched for writing GUIs.
Once you see it, it becomes really hard to un-see, and this problem of algorithmic programming languages, or in the architectural vernacular I prefer, call/return programming languages pops up everywhere.
Or how Guy Steele put it:
"Another weakness of procedural and functional programming is that their viewpoint assumes a process by which "inputs" are transformed into "outputs"; there is equal concern for correctness and for termination (and proofs thereof). But as we have connected millions of computers to form the Internet and the World Wide Web, as we have caused large independent sets of state to interact–I am speaking of databases, automated sensors, mobile devices, and (most of all) people–in this highly interactive, distributed setting, the procedural and functional models have failed, another reason why objects have become the dominant model. Ongoing behavior, not completion, is now of primary interest."
---
I also thought plain old metaprogramming facilities would do the trick, but in the end those are also insufficient. (For example, my HOM was entirely done via metaprogramming).
Architecture Oriented Programming is a generalisation of call/return programming, not an extension.
> He explains very well, and compellingly, how our programming languages are architecturally mismatched for writing GUIs.
I'm not sure I agree this was the takeaway. It does highlight that handling of concurrency is perceived by some as anathema to an algebraic perception of realtime processing. Just as one might say that programming languages are architecturally mismatched for writing a biological system, GUIs are a more constrained exercise of the same concepts. I believe this is largely related to the "distributed setting" and the "ongoing behavior" that Steele refers to.
(^4.3 paraphrased) GUIs are a set of small programmatic interactions that represent state manipulation.
Shortly thereafter:
> With interaction, state is essential in behaviors and the locality principle would require that all code that changes it is grouped. That pushed researchers to propose programming patterns based on finite state machines, Statecharts or Petri nets, but: when using a computation-oriented language, the transitions are implemented as functions or methods and the principle of locality is not met;
Transitions (eg text fade-out) are not implemented as functions, as everyone knows. Transitions are performed concurrently alongside the state change. Losing control of the transition (or not being aware of it) by initiating a form of parallelism is not optimal for a developer who wants to maintain control, but is undeniably effective in performing the work. The fact that any module (say a label's fadeOut method) can behave concurrently at any time is a side effect of information hiding, rather than a loss of main/subroutine empiricism. It does require a new set of concerns and architecture additions, rather than a full-blown teardown.
P.S. What I always found most interesting about that paper is the side-criticisms about encapsulation.
>> our programming languages are architecturally mismatched for writing GUIs.
> I'm not sure I agree this was the takeaway.
Hmmm...he says so quite explicitly. For example, Section 4:
"4 Understanding mismatches
We now propose a few reasons why architecture patterns proposed at level 3 for interactive software display incompatibilities with those offered by most programming languages."
Not sure how much more directly this could be expressed.
> ...handling of concurrency...
Concurrency is just one of a whole number of architectural issues. In the abstract, he writes: "Among these are the contra-variance of reuse and control, new scenarios of software reuse, the architecture-induced concurrency, and the multiplicity of hierarchies."
So it's not even concurrency as a stand-alone issue. It's just an artefact of the architectural mismatch.
> ... state manipulation ...
Yes, which is why the idea that FP is a good match for GUI programming is...er..."special". Which is reflected in the hoops that you have to jump through to make it work at all.
Not sure what you're trying to say with the bit about transitions. You obviously have to encapsulate the GUI and separate it from the Model. MVC, yay! And not group little bits of GUI+model together, that way madness lies. And bugs.
The core of MVC is essentially just a set of dataflow constraints, so can be expressed compactly, elegantly and correctly using a different architectural language, one that allows that type of connectors.
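To make "MVC as dataflow constraints" a bit more concrete, here is a rough Python sketch. The `Cell` class, `connect` method, and the dict standing in for a widget are all invented for illustration; the point is only that a view update is declared once as a constraint, not re-triggered from every setter:

```python
# Minimal one-way dataflow constraint: whenever the model value changes,
# every declared dependency is re-evaluated automatically.
class Cell:
    def __init__(self, value=None):
        self._value = value
        self._observers = []

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        self._value = new
        for fn in self._observers:  # re-fire all constraints on change
            fn(new)

    def connect(self, fn):
        # declare a constraint: target := fn(source); establish it immediately
        self._observers.append(fn)
        fn(self._value)


celsius = Cell(0.0)
fahrenheit_field = {}  # stand-in for a text field widget

# constraint: fahrenheit_field["text"] := celsius * 9/5 + 32
celsius.connect(lambda c: fahrenheit_field.__setitem__("text", c * 9 / 5 + 32))

celsius.value = 100.0
print(fahrenheit_field["text"])  # 212.0
```

The model code never mentions the view; the connection is a separate, first-class declaration.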
I will start with a couple clarifications, in good faith. I will admit that I have made lazy errors in this discussion. Specifically in regard to terminology. Afaik, there is no term for "temporally unsynchronized" other than parallelism or concurrency, and I have misused the term concurrency at times. I have also used the term transition to mean "asynchronous state transition". This may clarify my past statements, but I will attempt to do better, going forward.
So I'll focus on the material you are most familiar with, so that my comments might have the most specific impact or irrelevance.
To begin, I agree on the concept. If "constraining architecture" means any separation of concerns or interface.
To demonstrate that I did read through the code snippets you provided (though this is completely irrelevant to the points being discussed):
I believe there was an error in the code you posted in Listing 8@p.136 -
- k: degreesK {
self basicC: (degreesF - 273.15). *wat?*
ivar : ui / celsiusTextField / intValue := self c .
ivar : ui / fahrenheitTextField / intValue := self f.
}
I also thought the implementation was a little muddled and would implement a solution similar to this^.
^I don't have all the necessary syntax for Objective-Smalltalk; I have munged in some Smalltalk and made up some dictionary dereference syntax. In other languages, I would construct this differently, with the same overall strategy.
After digesting the material, I do not think about the concept of architecture in the way you or Chatty or Backus have been presenting it. In support of this thinking: the proper GUI abstractions have been sought by most programming language communities, and ReactJS (which uses HOM like Objective-Smalltalk), or a vanilla actor model^^^, is considered the current state of the art. The MVC layers are most popular in areas where inputs are relatively static within a thread (eg web backend software). In opposition to this thinking, I believe Software Architecture emerges from a combination of mental modeling, language syntax, supported data exchange formats, and resource dependencies/components^^. To some extent, Chatty recognizes this: "We highlight a strong coupling between languages and architecture,..."
Backus (https://dl.acm.org/doi/pdf/10.1145/359576.359579) proposed that FP was possibly preferable. The main problem with presenting FP as a different architecture is that it's just a bunch of abstractions on top of VNA (you might enjoy watching https://www.youtube.com/watch?v=otAcmD6XEEE). Chatty references Turing in 3.1.1@p.6. Ofc Lisp and Occam rely on implicit grouping, in the form of blocks (or any other lexical scope). Fundamentally, FP vs any other style is all the same implementation running on the same kind of machine. A second problem, having to do with specific purpose, is that the side effects in procedural code are largely a benefit for the median developer. At each interaction with state, a developer can turn a program (which is nothing more than some pseudo-algebraic transforms) into an actor via the consequences of the imperative structure. Lastly, the imperative structure is explicit, although this has started to break down across modern languages. To demonstrate: when using the FP "apply(...)" in Javascript, you can generate outputs that have no explicit inputs (eg https://www.w3schools.com/js/js_function_apply.asp), breaking the input/output paradigm. This is perfectly normal in FP, which (like the Java Spring Framework) makes VNA code harder to reason about by introducing composition of unrelated structures resolving at runtime. This is a step backward for maintainability, reasoning, testing and formalizing. I cannot be convinced that FP is "the future" or is a different software architecture in any meaningful sense.
As an aside, Erlang and Haskell, which are the most feature rich FP languages around, aren't popular for GUI development. If there was a tangible benefit, I believe it would have been demonstrated.
^^https://resources.sei.cmu.edu/asset_files/TechnicalReport/19... - M. Shaw's diagrams suppose that design is focused on components. This is partly incorrect. SDA diagrams uniformly track information flow, which necessarily involves multiple components for information to flow between.
^^^"And not group little bits of GUI+model together, that way madness lies." - almost every game engine and GUI framework uses it. Heck, even the Objective-Smalltalk program is taking inputs from non-trivial widgets, which are precisely small GUI+model underneath.
> I also thought the implementation was a little muddled
Yes! That was the point of that section of the paper, that doing the straightforward thing leads to a muddle:
"At this point, it becomes clear that our original strategy of updating the UI from the individual setter methods is probably not tenable in the long run."
A little later:
"As we have seen, even a conceptually very simple application such as a temperature converter quickly attracts significant complexity with non-obvious trade-offs once the requirements of an interactive version of that application are taken into account."
The fact that you need to create these centralised updater methods is non-obvious, and leads to a bunch of related non-local changes. Also a muddle.
"This complexity is not the result of a complicated domain model, but rather of the architectural embellishments required to move data from location to location in order to keep the different parts of the application (model, user interface, persistence) synchronized. In the next section, we will look at a mechanism for simplifying this kind of overhead."
> ReactJS (uses HOM like Objective-Smalltalk)
ReactJS uses Higher Order Messaging? Where?
> Chatty: "We highlight a strong coupling between languages and architecture,..."
Which was my point, yes, thank you! We are limited in what languages let us express cleanly. React/SwiftUI etc. are workarounds for this issue: they try to re-cast the GUI "problem" as a simple function/procedure. "The UI is a pure function of the model". If it were true, that would make things very simple. Alas, it isn't true: https://blog.metaobject.com/2018/12/uis-are-not-pure-functio...
But even without being true, it makes things very convenient, particularly for simple cases.
> feature rich FP languages around, aren't popular for GUI development. If there was a tangible benefit, I believe it would have been demonstrated.
Yes. There isn't. If you look at the papers, you will see that the benefit isn't so much claimed as assumed: "now we can get the wonders of FP for nasty GUI development".
> "And not group little bits of GUI+model together, that way madness lies." - almost every game engine and GUI framework uses it.
Yep, and that's why they all share the same problems (roughly "Massive View Controller"). It's not a trivial issue, because reusable widgets definitely make sense, but they lead you down a nasty path. https://blog.metaobject.com/2015/04/model-widget-controller-...
You need to treat those widgets as easy-to-use Views, and interact with them largely as if they were just views.
> It means you work with constraints, for example dataflow constraints.
I'm not sure anyone else would call that anything but an interface (not an "implements programmatic interface"). I was being gracious with separation of concerns, because a SoC implies a dataflow constraint that isn't necessarily enforced only by mechanism, where there can also be convention.
> What's VNA?
von Neumann Architecture
> The fact that you need to create these centralised updater methods is non-obvious, and leads to a bunch of related non-local changes. Also a muddle.
Synchronization requires synchronous operations. There are refresh rates physically and render methods virtually. This is obvious and not adhering to that is unmaintainable, due to confounding variables. If there were {N} programs to manage {N} components under a singular view, how is messaging handled? Messaging requires synchronization, which requires something akin to a render().
You have physical limitations on how many states can be maintained at any given moment and mapping limitations ie 2^3 (DNA) > 2 (Turing) states giving you more complexity, which is still bounded. An underlying dataflow constraint is still fundamentally a system that has to transfer the same information.
>> It means you work with constraints, for example dataflow constraints.
> I'm not sure anyone else would call that anything but an interface
I am sure that nobody calls that an "interface". First, a dataflow constraint is nothing like an interface (for one it is an implementation). It's also a standard term of art.
> This is obvious and not adhering to that is unmaintainable, due to confounding variables
It's actually not obvious, as shown in the paper. And it is not intrinsically necessary. It is necessitated by the architectural mismatch.
> Messaging requires synchronization, which requires something akin to a render().
Exactly. Messaging pretty much requires this, or more generally "call/return" requires this as a user-visible and user-defined construct, due to the mismatched architectures.
Dataflow constraints, as shown in the paper, do not require anything like this at all. A more appropriate architectural style makes the requirement vanish into nothingness.
> An underlying dataflow constraint is still fundamentally a system that has to transfer the same information.
Exactly, but it allows us to structure those transfers in a vastly more straightforward and obvious manner, as demonstrated in the paper.
I read the Shaw 1994 paper in full, and have a rebuttal paper that makes me confident the author's goal of an architectural programming language will fail.
The paper is A Note on Distributed Computing (1994) [1], and a choice quote is:
Historically, the language approach has been the less influential of the two camps. Every ten years (approximately), members of the language camp notice that the number of distributed applications is relatively small. They look at the programming interfaces and decide that the problem is that the programming model is not close enough to whatever programming model is currently in vogue (messages in the 1970s, procedure calls in the 1980s, and objects in the 1990s). A furious bout of language and protocol design takes place and a new distributed computing paradigm is announced that is compliant with the latest programming model. After several years, the percentage of distributed applications is discovered not to have increased significantly, and the cycle begins anew.
Even in 1994, failures at achieving this dream were cliché. I will not resummarize the paper, but I will explain the crux of the problem with architectural languages.
The problem with this dream is that to achieve it, (1) everything would need to be rewritten in a single language and (2) to support remote distribution, every procedure/interface/method would need to have full remote semantics. Many languages have tried this, including Smalltalk-sympathetic object message-passing languages like Erlang or Akka Actors. What you quickly realize when working with these languages is that the semantics of ordering, retries, idempotency, synchronization (all of the concepts that fit under the CAP theorem) are not easily expressed in a language. If you have anything remote whatsoever, whether that's an API or a database, the whole thing breaks down. Naive programmers using languages with message passing that say "when you need to scale, you can just make that function remote" find out very quickly that you cannot.
The industry solution to this problem is one of interfaces and patterns, not all-encompassing languages. Use of an RPC or serialization data format (even heterogenous ones!) across subsystems provides transport glue, and another piece of software can provide semantics to make software work in concert together. If systems are connected together actively, that software is called an orchestrator, and it makes all the calls directly. If systems are connected together passively, the subsystems themselves engage in choreography, and need to support a partial protocol of idempotency, awareness of ordering, retry semantics, etc.
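A hedged sketch of that partial protocol in Python — idempotency keys plus bounded retries. All names and shapes here are invented for illustration; real systems would persist the key store and catch narrower error types:

```python
# Two pieces of the choreography "partial protocol":
# (1) idempotency, so redelivered requests don't repeat side effects;
# (2) bounded retries, so transient failures don't kill the workflow.
import time


class IdempotentStore:
    """Remembers results by request key so duplicate deliveries are no-ops."""

    def __init__(self):
        self._done = {}

    def run_once(self, key, operation):
        if key in self._done:
            return self._done[key]  # duplicate: return the cached result
        result = operation()
        self._done[key] = result
        return result


def with_retries(operation, attempts=3, delay=0.0):
    last = None
    for _ in range(attempts):
        try:
            return operation()
        except Exception as exc:  # real code would catch specific errors
            last = exc
            time.sleep(delay)
    raise last


store = IdempotentStore()
calls = []

def charge():
    calls.append(1)
    return "charged"

# The caller may retry or redeliver; the side effect still happens once.
with_retries(lambda: store.run_once("order-42", charge))
with_retries(lambda: store.run_once("order-42", charge))
print(len(calls))  # 1
```

Note that none of this lives in the language: it's protocol glue layered on top, which is exactly the "interfaces and patterns" point.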
> The problem with this dream is that to achieve it, (1) everything would need to be rewritten in a single language
Not true. Look at COM. It has a reputation for being "ugly" and "legacy", but having worked with it, it's a marvel of engineering. There's still no "modern" system that can compare with it, IMO. In a C++ project of mine I implemented a COM server (not _that_ difficult) and could control the program from Powershell "for free". IOW, I got scripting with no glue, no bindings, nothing -- just instantiate the component from Powershell and invoke methods on it. (Courtesy of IDispatch, default implementation provided by ATL.)
> (2) to support remote distribution, every procedure/interface/method would need to have full remote semantics.
COM and DCOM implemented that. Where it came short -- or where I failed to find the documentation -- is about customizing timeouts of remote calls. I found some documentation about defaults, but, IIRC, the setting was system-wide and not applicable to a particular "connection".
COM is an abject failure when you're dealing with a remote system. Like, I challenge you to find anyone with experience in heterogeneous systems software that thinks COM functions well.
And so if we're talking about software that will always be hosted on one machine and memory space, fine, this can work.
The 1994 paper brings up Spring, which was later implemented in a successful way in Java with DI and AOP principles. It supported things like transactions across thread contexts, modules, etc., and those concepts were later adopted in other ecosystems like RoR or any other framework that used components + thread propagation.
The problem always comes down to this megalomaniacal vision of a "unified"-anything. Aside from the agreement problem of adopting those concepts in new and existing languages, some of these concepts simply cannot be adopted in certain languages: thread context propagation, concurrency mechanisms, error handling. And even if we did not have the limitations of certain languages not supporting certain behavior, we have the problem that composition semantics are deeply domain-specific.
Creating primitives for composition and execution of code that persist across languages and systems does not work without consensus adoption of a language. The reason I cannot go into IntelliJ, right-click, and "refactor" across my service architecture is the same reason why COM failed and why this will fail.
Hmm...interesting that you view this as a "rebuttal", as I generally regard it as further bolstering my point. I am certainly aware of it :-)
First, I don't view Architecture Oriented Programming as being primarily about distributed programming, and so far my work at least has not been focused on it at all, though that is starting to change.
Second, "everything would need to be rewritten in a single language" is not where I am going. At all. In fact, not requiring a single language is one of my central tenets, and particularly one of the reasons I reject Smalltalk-80, which very much wants everything to be in a single language in a single image.
Third, "to support remote distribution, every procedure/interface/method would need to have full remote semantics". Nope, not at all. Only if you have a monomorphic view of language. The "Everything-is-a-"-ism. Also, almost all of these techniques went local-first, and then tried to add distribution. Which gives you the problem that remote semantics are not a trivial extension of local semantics. If you read the Note carefully, you will, er, note that it never talks about the reverse case, going from remote to local. For good reason, because the reverse case is fine.
Which is one of the reasons In-Process REST works so well: it takes a distributed architecture and applies it (not 1:1, gotta adapt!) to the local case.
Anyway, once you figure out that you can have polymorphism not just at the data-modelling layer, but also at the language meta-layer, you also start to understand that local/remote all-encompassing/not are false dichotomies. We have commonality and variability analysis, let's use it! These different mechanisms have some things in common, and others where they are different. If we open the language up sufficiently, we can capture both the commonalities and the differences in a way that is convenient and flexible.
So when we get a new interaction mechanism, we write a new connector. Done.
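To make that concrete, here's a minimal Python sketch (all names invented, not any existing framework) where the wiring between two components is itself an object, so swapping interaction mechanisms means swapping connectors, not rewriting components:

```python
class Connector:
    """Toy first-class connector: the wiring between components is an object."""
    def connect(self, source, target):
        raise NotImplementedError

class DirectCall(Connector):
    """Synchronous call/return glue: send goes straight to receive."""
    def connect(self, source, target):
        source.send = target.receive

class Queued(Connector):
    """Asynchronous glue: same components, different interaction mechanism."""
    def __init__(self):
        self.pending = []
    def connect(self, source, target):
        self.target = target
        source.send = self.pending.append
    def drain(self):
        while self.pending:
            self.target.receive(self.pending.pop(0))

class Producer:
    def send(self, msg):
        pass  # replaced by whatever connector wires this component

class Consumer:
    def __init__(self):
        self.got = []
    def receive(self, msg):
        self.got.append(msg)

p, c = Producer(), Consumer()
q = Queued()
q.connect(p, c)
p.send("hello")   # queued, not yet delivered
q.drain()         # the connector decides when delivery happens
print(c.got)      # ['hello']
```

The producer and consumer are identical under both connectors; only the connector object captures the variability between call/return and queued interaction.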
Going back to TFA and the John Hughes paper it quotes, the situation rightly critiqued in the Note is that of having one or maybe two "connectors" canonised into the language, one kind of glue. But as I write, we need lots, not one or two. So adding different kinds of connectors is the defining feature of Architecture Oriented Programming, and the exact opposite of the situation of baking one or maybe two into the language.
And those connectors need to be pushed down into the subsystems. This is the problem. Whatever the language used, you've effectively described a shared library, and one that needs to be handled by external cooperating services (including data stores).
> For good reason, because the reverse case is fine.
It is not, as per my above comment. Consistency requires bi-directional participation between actors if done at a uniform language or interface level, and good luck getting MySQL or Postgres to push your semantics into their software.
For any HN users who are reading this and think "hey, writing everything in one language is totally fine!" please consider the first day you acquire a company / are looking to be acquired by another company that didn't use your language.
You're creating a false dichotomy. Shared libraries are totally fine.
> It is not, as per my above comment.
Yes it is, as per my comment.
> good luck getting MySQL or Postgres to push your semantics into their software.
No luck needed as there is no such requirement.
> "hey, writing everything in one language is totally fine!"
Once again: you are completely making up this "everything in one language" bit out of nothing. There is no such requirement or even goal. Though once you find out how nice it is to program in that language, you might want to write everything with it.
> So when we get a new interaction mechanism, we write a new connector. Done.
Interesting idea... but I'm struggling to get the full picture here... (I might be way-off here, but):
Is this about "formalizing" connectors? Such that, once a connector technology comes along (RPC, REST, Web Sockets, GraphQL, who knows what next), such connectors could be implemented on either side of the boundary layers and then "selected" via some architectural language? I think the semantics of each "connector" demands different design constraints on either side of the equation.
An overly simplified example: Java logging "appenders" allow dispatching log messages to different targets (stdout, filesystem, database, etc.) But, STDOUT doesn't "fill up", while file systems/databases do. DBs support querying, while STDOUT does not. DBs do get overwhelmed; should processing halt until such "overwhelmed" situations get resolved? Databases (should) include security; STDOUT does not. Etc.
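One way to model that difference is to make each appender advertise its behavioral constraints rather than just its destination. A toy Python sketch (all names hypothetical, not Java's actual logging API):

```python
from abc import ABC, abstractmethod

class Appender(ABC):
    """Toy log appender: targets differ in capabilities, not just destination."""
    can_fill_up = False      # does the target have finite capacity?
    supports_query = False   # can we search past messages?

    @abstractmethod
    def append(self, msg): ...

class StdoutAppender(Appender):
    """Never fills up, can't be queried."""
    def append(self, msg):
        print(msg)

class ListDbAppender(Appender):
    """Stands in for a database target: finite, queryable, can be overwhelmed."""
    can_fill_up = True
    supports_query = True

    def __init__(self, capacity=1000):
        self.rows, self.capacity = [], capacity

    def append(self, msg):
        if len(self.rows) >= self.capacity:
            # The caller must choose a policy: halt, drop, or retry.
            raise OverflowError("target overwhelmed")
        self.rows.append(msg)

    def query(self, substring):
        return [r for r in self.rows if substring in r]
```

The capability flags are exactly the "solution-dependent types" you mention; the policy for an overwhelmed target still has to live with the caller, not the appender.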
If there were different "types" of connectors defined, each with different behaviors, and those could be easily chosen, then maybe. But even the "types" would be solution-dependent. Retry rules, idempotency, serialization rules, compensating transaction support, etc. These topics (to me) are far more important than simply creating new connectors.
> These topics (to me) are far more important than simply creating new connectors.
Yes. What this vision requires is pushing down a shared library or coordinated behavior into every new language or future technology that requires it.
If the behavior is not actually required, and is more of a suggestion, then it's a "protocol" and requires interface modifications everywhere. And now we're back to "why do we need this shared code/language?" and "how is this different than patterns?"
This model has been tried many many times. We just got rid of the ignominious stench of CORBA, with its removal from Java in the last year after a 25-year failure. The architecture astronauts require their language to be guaranteed across wire boundaries, which is a fancy way of saying that their uber-language needs to be implemented in all the subsystems for the guarantees to work.
> but I'm struggling to get the full picture here...
Me too, but I've been struggling much longer :-)
> Is this about "formalizing" connectors?
Operationalizing more than formalising, although that is a variant of formalising. "Let's actually program using architectural abstraction", rather than just (variants of) procedural abstraction. Or rather than just using architectural abstraction to describe systems that we then still have to program procedurally.
I'm still completely unsure how my logging example above would play out. I've briefly looked at your code samples, but still not getting it. I just don't see how this is a "programming language" thing.
> I just don't see how this is a "programming language" thing.
I think you're getting it just fine, because most of it isn't a programming language thing, and shouldn't be. I explain this in more detail in my paper: Can Programmers Escape the Gentle Tyranny of Call/Return [1]
We've really been able to build these kinds of architectures for some time using call/return, and particularly using OO. However, we haven't been able to directly express them in code. Andrew Black put it really well: "The program’s text is a meta-description of the program behaviour, and it is not always easy to infer the behaviour from the meta-description."
The fact that this is so indirect leads to all kinds of mayhem, with one particular kind of mayhem being that good architecture in our current systems necessarily entails indirection, which leads architecture astronauts to the conclusion that it is the indirection that is the good thing, and thus they indirect away.
> I'm still completely unsure how my logging example above would play out.
For one, it doesn't really require a lot of linguistic support for the parts you mention. In general, the idea is to have as little in the language itself as possible. "Talk small and carry a large class library". But a little bit is needed. For streaming/pipe-and-filter support, which I think the logging would slot into, I use Polymorphic Write Streams [2]. These are just some classes implementing a pipe-and-filter architectural style using a few protocols and some triple-dispatch trickery.
They can be connected to stdout, to variables, ordered collections, to GUI text fields and surely also to logging subsystems.
The linguistic support is fairly minimal: the polymorphic "->" connection "operator", which can be used to connect filters into a pipe but is not specific to filters, and a bit of syntactic sugar that ensures defining a filter is no more effort than defining a method.
So a small script to convert input to upper case looks as follows:
The 'filter' keyword introduces a filter definition, which is just shorthand for a class definition (see the C++ metaclasses proposal by Herb Sutter; this is similar, but not as sophisticated).
Then we connect stdin to the filter and the filter to stdout (rawstdout; the default stdout is cooked).
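The original script isn't reproduced in this thread, but the shape of the idea can be approximated in Python, with an overloaded `|` standing in for the `->` connection operator (all names are mine, not Objective-S syntax):

```python
class Filter:
    """Minimal pipe-and-filter element; `|` plays the role of the connector."""
    def __init__(self, fn):
        self.fn = fn
        self.downstream = None

    def __or__(self, other):
        self.downstream = other
        return other          # so pipes chain: a | b | c

    def write(self, item):
        result = self.fn(item)
        if self.downstream is not None:
            self.downstream.write(result)

class Sink(Filter):
    """Terminal element that collects what reaches it (stands in for stdout)."""
    def __init__(self):
        super().__init__(lambda x: x)
        self.items = []

    def write(self, item):
        self.items.append(item)

upper = Filter(str.upper)   # the "filter definition"
out = Sink()
upper | out                 # connect filter to sink
upper.write("hello")
print(out.items)            # ['HELLO']
```

Defining the filter is one line, and connecting it is one expression; the remaining linguistic support in the real system is the sugar that makes both first-class.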
I gave a number of additional examples in my UKSTUG talk:
In a more general form, logging also enables dataflow constraints: any store can be wrapped in a logging store that logs all accesses. That logger can then be connected to a copy-stream that takes two stores and applies the operations that were logged to those stores. But you can also just connect the logging store to stdout (or a database, or a file) for debugging purposes.
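A minimal Python sketch of that wrapping pattern (all names invented): a store is wrapped in a logger, and the logged operations are either replayed onto replicas by a copy-stream or inspected for debugging.

```python
class Store:
    """A simple key-value store."""
    def __init__(self):
        self.d = {}
    def put(self, k, v):
        self.d[k] = v

class LoggingStore:
    """Wraps a store and streams every write to a connected listener."""
    def __init__(self, store, listener):
        self.store, self.listener = store, listener
    def put(self, k, v):
        self.store.put(k, v)
        self.listener(("put", k, v))

class CopyStream:
    """Replays logged operations onto one or more replica stores."""
    def __init__(self, *replicas):
        self.replicas = replicas
    def __call__(self, op):
        kind, k, v = op
        for r in self.replicas:
            getattr(r, kind)(k, v)

primary, rep1, rep2 = Store(), Store(), Store()
logged = LoggingStore(primary, CopyStream(rep1, rep2))
logged.put("x", 1)
print(rep1.d, rep2.d)  # {'x': 1} {'x': 1}
```

Swapping the `CopyStream` listener for `print` (or a file or database appender) gives the debugging variant without touching the store.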
There's more, but the central idea is to have as little linguistic support as possible, but of course as much as needed. It would have been nice if that could have been driven to zero, but alas that's not the case.
Fixed link is https://resources.sei.cmu.edu/library/asset-view.cfm?assetid...
Abstract: Software designers compose systems from components written in some programming language. They regularly describe systems using abstract patterns and sophisticated relations among components. However, the configuration tools at their disposal restrict them to composition mechanisms directly supported by the programming language. To remedy this lack of expressiveness, we must elevate the relations among components to first-class entities of the system, entitled to their own specifications and abstractions.