I _agree_ that we should think more about systems design. And I _agree_ that the individual "units" that we're designing would benefit from standard ways of communicating with each other.
The entire thing about "let's not call it a program!" feels like a distraction from the core point, though. I guess I don't really believe that calling the "unit of software development that I'm currently working on" a program harms it or limits it in any way.
Even in the article, the author talks about isolated spaces:
>While the Smalltalk model holds a large number of objects in a single ‘space’ - we could imagine a system where each object has internal space which holds a cluster of inner objects, and so on.
...and so, poof, we now have a unit of software that could easily be called a "program." A program that contains multiple layers of inner programs, but there's nothing about the word "program" that limits how it can be applied.
So I think 95% of this article is philosophical semantic gymnastics. There's a seed of an interesting idea -- software would benefit from the ability to connect its pieces more flexibly -- but without an actionable hint as to how to _get_ there, this is just a pointless intellectual exercise.
Changing the word to something else doesn't actually make the problem easier to solve.
> Changing the word to something else doesn't actually make the problem easier to solve.
This is true, but what exactly is 'the problem' in the first place? Framing makes a difference. Do we assume we are designing 'units' that communicate with each other or are there other ways to think about the system?
Is the unit of design also the unit of update? What about cross-unit parameters - if we want to change a parameter that affects multiple units, do we need to rebuild them and reinstall them in our network of units? Or can we have a projection of our system that shows this parameter as a single updatable cell? What if this parameter is derived (i.e. not something you fed in as a constant but something that represents emergent behavior)?
I'm imagining projectional editing of the system above: can we now consider the original 'units' as just one possible projection of the system - one lens we can view and update the system through? One view of the system (communicating units) doesn't have to be the primary view anymore. Can I project an 'emergent program' - a composition of various units, but only the slices that I'm interested in? Can I update this in place and affect system behavior?
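One way the "cross-unit parameter as a single updatable cell" idea could be made concrete is to have multiple units read from a shared cell, so one in-place update to the projection changes every unit's behavior. A minimal sketch, with invented names (`Cell`, `Thermostat`, `Alarm`) that are not from the article:

```python
# Sketch: two independent "units" share one parameter through a cell.
# Updating the cell in place is one projection of the system - a single
# edit point that affects every unit that depends on the parameter.

class Cell:
    def __init__(self, value):
        self.value = value

class Thermostat:
    def __init__(self, limit_cell):
        self._limit = limit_cell
    def too_hot(self, temp):
        return temp > self._limit.value

class Alarm:
    def __init__(self, limit_cell):
        self._limit = limit_cell
    def message(self, temp):
        return "ALERT" if temp > self._limit.value else "ok"

limit = Cell(30)                       # the single projected cell
units = [Thermostat(limit), Alarm(limit)]
limit.value = 25                       # one update; every unit affected
```

No rebuild or reinstall of the units was needed, which is the point of the question above.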
I find thinking of these as systems helps me ask these questions, while thinking about 'programs' leads to different questions such as - are they 'statically typed', how do they communicate, and so on. It has more to do with the common connotations of the word 'program'. The concerns here are broader than 'standard ways of communicating' between individually designed units.
Say for instance you make the unit of design the unit of update. Then any breaking change means the unit needs to support both old and new methods of access. This is often appropriate for something like a public API used by thousands of people. But if used at every resolution, it would add too much complexity and need for legacy handling of access patterns.
So if the unit of update is instead a self-contained ecosystem of components, then the components can all be simultaneously updated, reducing their internal complexity. The external interface should still probably be versioned.
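A minimal sketch of that split, with invented names: internal components are free to change shape on every release because the whole ecosystem updates at once, while the external interface stays versioned for outside callers.

```python
class _OrderStore:
    """Internal component: free to restructure on any release,
    since everything inside the ecosystem updates together."""
    def __init__(self):
        self._orders = {}

    def put(self, order_id, item, qty):
        self._orders[order_id] = {"item": item, "qty": qty}

    def get(self, order_id):
        return self._orders[order_id]


class PublicApi:
    """External interface: versioned, so outside consumers are
    insulated from internal restructuring."""
    def __init__(self):
        self._store = _OrderStore()

    def create_order(self, order_id, item, qty):
        self._store.put(order_id, item, qty)

    def get_order_v1(self, order_id):
        # Old wire format: an (item, qty) tuple, kept for legacy callers.
        o = self._store.get(order_id)
        return (o["item"], o["qty"])

    def get_order_v2(self, order_id):
        # New wire format: a dict with named fields.
        return dict(self._store.get(order_id))
```

The legacy-handling burden lives only at this one boundary, not inside every component.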
And...well, you've just again wrapped up something in that self-contained ecosystem that could very well be called a "program." In fact, it could also be called an object oriented program: Each component is an object that has an API and hidden data.
I'm a fan of static typing, so I'd answer a firm yes to that question. How they communicate -- message passing seems the right paradigm in most situations. So the interesting (to me) solution would be to standardize static types over a message passing architecture. Something like Google's Protobuf? Or JSON with JSON Schemas? Who knows. Depends on the nature of the problem you're trying to solve.
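One way to picture "static types over a message passing architecture" without committing to Protobuf or JSON Schema: each message is a typed record, and handlers are registered per message type, so a component only ever receives messages whose shape it declared. A toy sketch (all names invented):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OrderPlaced:
    order_id: str
    amount: int

@dataclass(frozen=True)
class OrderCancelled:
    order_id: str

class Bus:
    def __init__(self):
        self._handlers = {}

    def subscribe(self, msg_type, handler):
        self._handlers.setdefault(msg_type, []).append(handler)

    def publish(self, msg):
        # Dispatch strictly on the concrete message type - the toy
        # analogue of dispatching on a Protobuf message or JSON Schema.
        for handler in self._handlers.get(type(msg), []):
            handler(msg)
```

A real version would add serialization and schema evolution, which is exactly where Protobuf or JSON Schema would come in.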
And that gets to one final point: Most programmers aren't capable of thinking in big systemic terms like this. Probably 90% of developers are going to be restricted to operating within a component. I think this is why React and other web client component architectures are so popular today: They are great enablers of average developers (and, to be fair, can be high leverage for great developers as well -- at least up until the point where they get in the way).
A comparison would be: a computer system is like a biological ecosystem, whereas a program is like an individual biological entity. Much like biological systems, computer systems and programs are made up of small similar pieces that have defined functionality, and their interactions result in a system. Microbiology results in macrobiological systems, and macrobio results in ecosystems, etc. Program, system, whatever; it's turtles all the way down.
> we can also consider replacing the write-then-run program idea [..] with an incremental model where the programmer successively provides more detail about the desired behavior and a constraint solving system refines the behavior
So, like, test-driven Agile programming?
We do instrument systems, consider strace or https://zipkin.io/
> A comparison would be a computer system is like a biological ecosystem.
Excellent analogy and one that I really like. What are the 'programs of biology'? One way of decomposing this is 'spatial' - e.g. a cell is a program, a limb or an organ is another larger one, etc. But what about the nervous system, circulatory system, etc.? They are useful perspectives, but they span different units in the spatial decomposition. Can I view and modify them directly? What about updates - consider attaching another limb vs changing the electrical conductivity of neurons - while preserving some consistency and enforcing constraints.
Reminded me of Bret Victor's "toolkits, not apps" comment ( https://mobile.twitter.com/worrydream/status/881021457593057... ) and an excerpt from the Smalltalk era: https://www.youtube.com/watch?v=AnrlSqtpOkw&t=4m19s
Yes, apps are silos - why? Making non-siloed apps is harder, and integration across them is harder still. Is it possible to design the underlying system/substrate such that siloed apps are not the structures that grow easily, and integration is not something that has to be 'added on' but emerges automatically?
It offered some of these kinds of things. A contact card could be a file system object with metadata, and you could query the file system not just for them but for all contact cards that had a phone number or AIM handle in the metadata. So if applications like your mail client and chat clients are aware of this they can use one common data store and just pull info from it in a similar manner. This breaks down the silo between applications that might use the same data and/or files.
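A toy sketch of that kind of metadata-queryable store: "files" carry key/value metadata, and any application can run the same query instead of keeping a private silo. (The API here is invented for illustration, not the actual file system calls.)

```python
class MetadataStore:
    def __init__(self):
        self._objects = []

    def add(self, name, **metadata):
        self._objects.append({"name": name, **metadata})

    def query(self, **criteria):
        """Return every object whose metadata matches all criteria."""
        return [o["name"] for o in self._objects
                if all(o.get(k) == v for k, v in criteria.items())]

store = MetadataStore()
store.add("alice.card", kind="contact", has_phone=True)
store.add("bob.card", kind="contact", has_phone=False)
store.add("notes.txt", kind="document")

# The mail client and the chat client can share this exact query:
contacts_with_phone = store.query(kind="contact", has_phone=True)
```

The silo disappears because the data lives in the store, not in either application.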
Another interesting idea is the data types in Amiga OS (http://www.mfischer.com/legacy/amosaic-paper/datatypes.html). I believe it lets applications be generic enough so that as new media formats are added, the applications automatically work with them, without rebuilding or restarting.
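The shape of the datatypes idea can be sketched as a shared decoder registry: applications ask the registry to load a file, so installing a new format handler makes every application understand the new format without being rebuilt. (This is a hypothetical illustration, not the actual AmigaOS API.)

```python
class DatatypeRegistry:
    def __init__(self):
        self._decoders = {}

    def register(self, extension, decoder):
        self._decoders[extension] = decoder

    def load(self, filename, data):
        ext = filename.rsplit(".", 1)[-1]
        if ext not in self._decoders:
            raise ValueError(f"no datatype installed for .{ext}")
        return self._decoders[ext](data)

registry = DatatypeRegistry()
registry.register("txt", lambda data: data.decode("ascii"))

# Later a new "format" is installed system-wide; existing applications
# that call registry.load() handle it without rebuilding or restarting:
registry.register("rot13", lambda data: data.decode("ascii").translate(
    str.maketrans("abcdefghijklmnopqrstuvwxyz",
                  "nopqrstuvwxyzabcdefghijklm")))
```

The application only depends on the registry's interface, never on the concrete formats.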
Lisp machines were mentioned in another thread here. They also had a better story around sharing data - you shared rich structures (rather than blobs of bytes that are encoded and decoded at every boundary) - and they would update live, etc.
In the end it seems today we're entrenched in something that's reasonable, but many good ideas could be explored more.
It’d be a clear advantage if they were all built to be systems, but it would be even better if they were built to be systems that coexist with each other.
I wish suppliers would ask us what hardware we operate, what services and APIs we’d want them to interact with, and make plans for what we have to do when their system eventually dies. Because right now, everything is built for specific purposes, and it’s impossible to make most things work together.
So you’ll have one program that handles finances for sick people. Another that handles networking to companies and businesses that help get sick people back on track. Another program that keeps track of the wellbeing of families with sick members. A whole range of medical programs. And none of them work together, run on the same hardware, or have any form of uniform support solution, so we have to waste thousands of man hours simply making sure every program has the data it needs.
It’s silly, but every development house seems to suck equally at making non-silo solutions.
The problem is if it's difficult to get data into or out of them.
At work we make a fairly specialized application, but we're very flexible in how we can receive data and send data back out. From manually copying and pasting Excel sheets, parsing emails and PDFs, plain XML over SFTP or web services/REST APIs, we've got solutions depending on where the core data comes from and where results need to go (status back to WMS/TMS, financial details to the invoicing system, etc).
If needed we have customer-specific "integration shims" to easily adjust/correct/ignore data as it comes or goes.
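In spirit, a shim like that is just a small adapter that normalizes one customer's quirks before the core application sees the record. A sketch with invented field names (`sku`, `qty`):

```python
def core_import(record):
    # The core application's contract: qty is an int, sku is uppercase.
    assert isinstance(record["qty"], int)
    assert record["sku"] == record["sku"].upper()
    return record

def customer_x_shim(raw):
    """Hypothetical customer X sends qty as a string and sku in
    lowercase; fix both, and drop their internal-only fields."""
    return {
        "sku": raw["sku"].upper(),
        "qty": int(raw["qty"]),
    }

normalized = core_import(customer_x_shim(
    {"sku": "ab-1", "qty": "7", "x_internal": "ignore"}))
```

Keeping the quirk-handling in the shim means the core stays small, which is the whole "focus on a few things" strategy.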
This allows us to focus on making our application really good at a few things, and let other programs handle the tasks our customer also needs to get the job done (warehouse management, transport management, invoicing etc).
I agree though that support can be a bit of an issue when you have many systems working together and something is wrong. The place where the user discovers the error is often not the place where the error occurred. Monitoring is important, allowing support to be proactive ("hey, the server where you're hosting the DB to our program has less than 1GB of disk space left again").
So, what is a system? A system is a set of things—people, cells, molecules, or whatever—interconnected in such a way that they produce their own pattern of behavior over time. The system may be buffeted, constricted, triggered, or driven by outside forces. But the system’s response to these forces is characteristic of itself, and that response is seldom simple in the real world.
When it comes to Slinkies, this idea is easy enough to understand. When it comes to individuals, companies, cities, or economies, it can be heretical. The system, to a large extent, causes its own behavior! An outside event may unleash that behavior, but the same outside event applied to a different system is likely to produce a different result.
Think for a moment about the implications of that idea:
• Political leaders don’t cause recessions or economic booms. Ups and downs are inherent in the structure of the market economy.
• Competitors rarely cause a company to lose market share. They may be there to scoop up the advantage, but the losing company creates its losses at least in part through its own business policies.
• The oil-exporting nations are not solely responsible for oil-price rises. Their actions alone could not trigger global price rises and economic chaos if the oil consumption, pricing, and investment policies of the oil-importing nations had not built economies that are vulnerable to supply interruptions.
• The flu virus does not attack you; you set up the conditions for it to flourish within you.
• Drug addiction is not the failing of an individual and no one person, no matter how tough, no matter how loving, can cure a drug addict—not even the addict. It is only through understanding addiction as part of a larger set of influences and societal issues that one can begin to address it.
And an initial definition of a system on chapter 1 page 1:
A system is an interconnected set of elements that is coherently organized in a way that achieves something. If you look at that definition closely for a minute, you can see that a system must consist of three kinds of things: elements, interconnections, and a function or purpose.
edit oops, I misinterpreted what you meant by “some of those words”, didn’t take it literally... Will leave this up anyways I suppose, sorry!
Kind of reminds me of tuple spaces. Which is a neat idea. From the system vs program perspective, your system is running on this tuple space. Programs are individual actors capable of accessing the tuple space and triggered via some mechanism (like being notified if a certain type of data is posted or periodically executing a query and running if they receive a response). You can then grow the system by allowing tuples of different types to accumulate, and developing or extending actors in how they process this store.
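A toy, synchronous sketch of Linda-style tuple space primitives along those lines: actors post tuples into a shared space, and other actors are triggered when a tuple matching their pattern arrives. (Real tuple spaces also support blocking reads and removal; this only shows the notification mechanism.)

```python
class TupleSpace:
    def __init__(self):
        self._tuples = []
        self._watchers = []   # (pattern, callback) pairs

    def _matches(self, pattern, tup):
        # None in a pattern acts as a wildcard for that position.
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def out(self, tup):
        """Post a tuple; notify any actor watching for its shape."""
        self._tuples.append(tup)
        for pattern, callback in self._watchers:
            if self._matches(pattern, tup):
                callback(tup)

    def watch(self, pattern, callback):
        """Register an actor interested in tuples of a given shape."""
        self._watchers.append((pattern, callback))

space = TupleSpace()
totals = []
# An actor that reacts to any ("order", <qty>) tuple:
space.watch(("order", None), lambda t: totals.append(t[1]))
space.out(("order", 5))
space.out(("log", "ignored"))   # no watcher matches this shape
space.out(("order", 2))
```

Growing the system then means letting new tuple shapes accumulate and adding or extending watchers, without touching the existing actors.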
Things can go very bad very quickly once a death spiral kicks in.
Sometimes we can deliberately induce emergent behavior (https://en.wikipedia.org/wiki/Automated_planning_and_schedul...) by composing our system correctly. To create behaviors and solutions that we didn't initially conceive of. But if unplanned emergent behavior shows up, your system has become unmaintainable until you've identified the causes.
Also, I think I deleted something extra in an edit of my original post. These planning algorithms (or some of them) involve creating options for your agent/AI and seeing what it does when interacting with its environment. You don't design the agent so that it will do a complex task; rather, you give it simpler abilities (recognize an object type, pick up an object, set down an object on another object, etc.) and the complexity arises precisely from emergent behavior. This was meant as an example of precisely where you do want emergent behavior and actively work to induce it.
A program is a set of commands. It can include state information and adaptability. A system is a system. A system, from a programming perspective, is something to be modeled and manipulated to a desired state. If you do this correctly, you're done. You don't need a degree in philosophy for this.
On the other hand, a lot of programming involves creating a stateless function that looks like a simple set of commands. That's it. Input in, output out, predictable behavior, zero adaptability, zero mutability.
System dynamics is the name of the field. I studied it outside a CS curriculum as a "high-level" topic, but really I took to it like a duck to water, because any good coder does this. It's the difference between a "green" coder and an "experienced" coder. And it shows up in several of the maths/fields. Anywhere there is complexity, nonlinear feedback, and long delays in feedback, you get "emergent" system behavior.
I try to tell people: you can't oversimplify things, you can only stuff the complexity somewhere else. And when you adopt a framework, it's a set of tradeoffs (on where it shoves the complexity). Beware frameworks that "hide" where they shove the complexity, because it will always bite you in the ass. Sort of like poker: if you can't spot the mark (the complexity)... ;)
I hate to tell you this, but most people hiring coders have NO IDEA about this level of coding. They're just looking for cogs. How many times have you heard, "the system is so complex, no single person at the company understands it"?... inevitably followed by a series of catastrophic failures that are, largely, systemic hazards.
That's why there is such a large "framework adoption" movement in CS. Not because the companies can't dev the software in-house, but because they don't understand what they are missing. The mismatch between the "coder" and the "architect" (or worse, where they've fetishized the divorce between system requirements and implementation). The whole CI movement is about trying to close the gap using functional testing and small code updates. But it doesn't work; you can't test for what you don't know (demand-specific instability)... So now you have to instrument the entire codebase just to make it run (or take the particularization even further a la "containers").
More than anything, this makes the difference in the quality of the coder/codebase. If you have a good grasp of how things change, rather than of static targets, you don't have to go through the entire morass. You just code it right the first time.
The other thing that really matters, is domain modeling (separate from system dynamics, which is actually implementation/environment specific). Knowing your problem well, and being able to correctly pin the context boundaries, is super critical.
Last is some control theory; that is, how to auto-remediate problems as they develop. Believe it or not, you don't need large amounts of instrumentation to do this. Good solutions automatically trade off against direct sensing of changes in algorithm performance/inputs. They are ROBUST.
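One lightweight illustration of that idea, under my own assumptions (the function name, the target of 1 second, and the 0.5 gain are all invented): instead of heavy instrumentation, a worker adapts its own batch size from a single directly-sensed signal, how long the last batch took.

```python
def adapt_batch_size(batch_size, last_duration, target=1.0,
                     lo=1, hi=1000):
    """Simple proportional controller: shrink the batch when the last
    run was slower than the target, grow it when faster, clamped to a
    safe range so a bad reading can't push the system off a cliff."""
    error = target - last_duration     # positive => we have headroom
    adjusted = batch_size * (1.0 + 0.5 * error / target)
    return int(max(lo, min(hi, adjusted)))

# A batch that took 2s (too slow) halves the batch size:
smaller = adapt_batch_size(100, 2.0)
# A batch that took 0.5s (fast) grows it:
larger = adapt_batch_size(100, 0.5)
```

The remediation needs only one measurement the code already has, which is the "you don't need large amounts of instrumentation" point.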