> Our approach to keeping software comprehensible is to reduce information hiding and abstraction, and instead encourage curiosity about internals
YES! Precisely. Most "bicycle for the mind" or "tool for thought" approaches go the completely opposite direction accidentally. Many programming languages and communities nowadays too.
In reality, if you wanna teach, you have to unveil the guts of the system. Sure, it's "scary" and all, but still, initial hand-holding & letting newcomers see the internals, rather than presenting an overly polished surface at the expense of everything else, is a much better designed learning process. Design is "how it works" after all. *
Related: a common misconception when folks defend abstractions & encapsulation is to raise the issue that letting newcomers explore the internals causes too much trouble. That's an obvious strawman; these methods are not championing everyone writing anything everywhere. They can stay read-only, but public. E.g. an interface can expose its internal data types, without allowing consumers to actually leverage that (enforced through types or something else). This is easily doable, and you don't sacrifice learnability in the service of team programming sanity.
* This point seems to get dismissed quickly by folks who only care about visual polish nowadays. Imo this is naive. I care about visuals a _lot_, and we should definitely aim for both learning polish and surface polish. But you can't let the latter calcify the former.
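The "read-only but public" idea above can be sketched in Rust. This is an illustrative example, not anyone's actual API: the module name `engine` and type `Cache` are made up. The point is that a private field plus a borrowed accessor lets consumers inspect the representation without being able to mutate it.

```rust
// Hypothetical module: internals are visible for learning, but
// mutation stays behind the module's own methods.
mod engine {
    pub struct Cache {
        entries: Vec<(String, u64)>, // private: consumers can't reach in and edit
    }

    impl Cache {
        pub fn new() -> Cache {
            Cache { entries: Vec::new() }
        }

        pub fn insert(&mut self, key: &str, value: u64) {
            self.entries.push((key.to_string(), value));
        }

        // A read-only window into the internals: callers see exactly how
        // data is stored, but the borrow checker forbids tampering.
        pub fn internals(&self) -> &[(String, u64)] {
            &self.entries
        }
    }
}

fn main() {
    let mut cache = engine::Cache::new();
    cache.insert("answer", 42);
    // Consumers can inspect the representation...
    assert_eq!(cache.internals().len(), 1);
    assert_eq!(cache.internals()[0].1, 42);
    // ...but something like `cache.internals().push(...)` would not compile.
    println!("{:?}", cache.internals());
}
```

So the internals stay learnable without sacrificing team-programming sanity: curiosity is served by the accessor, and invariants are still enforced by the module.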
My advice for all the "tool for thought"/"bicycle for the mind" projects: all your opaque designs and small interface flourishes falter, in the long run, in the face of a properly exposed and cleaned-up paradigm. This applies on a smaller scope to modern libraries too. That boilerplate generator you mega-fine-tuned to the point where you delight yourself in guessing the user's mind is ultimately a roadblock in the learner's thought process. Expose the boilerplate and document how you're automating it instead. You're trying to teach fishing, not to show off your own fish.
Another good analogy I've heard elsewhere is to ask whether you're teaching people how to cook, or just building a microwave. For a tool for thought product ask yourself if you ended up accidentally making "an appliance".
This brings up an interesting aside: the right way to build software for "agile" processes might look very different from the right way to build software for "frozen" (non-agile) processes (big corp, stable processes, few surprises if any, etc.). Ideas such as abstraction/encapsulation that work well for the latter situation might be totally out of place in the former.
So the fallacy in the argument is that implementation details are relevant to everyone.
They aren't. They're relevant to system builders and people who like tinkering because that is a separate domain to the user domain.
Most users just want an appliance, because a good appliance - like Hypercard, Excel, and maybe VBA - allows them to concentrate on the things that interest them, not on the things that interested the appliance designer when they were building their product.
This is fine for products in the app store - it's a mark of good design that it reveals exactly as much complexity as it needs to, but no more - but it breaks down when the user domain and the tinkerer domain overlap, as they do in professional software development.
The reason it breaks down is because there is no general solution. No library is ever going to be a perfect snap-in tool that solves your problem for you, no language is going to hide OS-level and library-level abstractions with perfect elegance, and no OS is ever going to have perfectly elegant abstractions for file and network operations, simultaneous and synchronous operations, and so on.
Everyone has opinions about how software should be designed, the problems are essentially open-ended, tinkerers have a tendency to turn everything into an excuse for tinkering instead of a streamlined product/service pipeline, and the result is the mess we have today.
And this is probably how it has to be, given the limitations of human reason.
I can assure you that I'm not conflating simple things like user experience vs developer experience. That was the point of the asterisk.
I'm also very positive that the creators of Hypercard and Excel do _not_ think of their creations as appliances.
The whole point of the discussion around computers and computer languages/systems as tools for thought is the premise that computing can be another form of mass literacy. Let's also not get into whether this should be the case (and my opinion here is less naive than one might think); I'm just saying that this is the debate. At the risk of exaggerating and drawing an imperfect parallel, the medieval monks also considered human language literacy as a "developer" thing, e.g. "reading the Bible is a separate domain to the everyday life of a consumer". And yes, they did make beautiful books as products/appliances. Again, I'm not saying programming languages will ever be useful to learn for the masses; that's another topic of discussion.
I agree very much that there is no general solution, and that tinkerers gotta tinker. Though this is orthogonal to the discussion (FYI I absolutely despise language tinkering/golfing these days; that's why I made sure to talk about this topic without talking about Mu earlier).
Exactly right. The Jobs line is a half-baked and perhaps half-understood take on the thinking of people like Kay, Papert, et al. Anyone curious about this line of thinking -- which has fallen way, way out of the popular computing discourse -- might want to check out Andrea diSessa's "Changing Minds" for a decent summary.
To bring it back to software, it would be great to separate the user experience from the developer experience if we knew how. But we don't currently know how to make a program do precisely what the user intends it to do but not something else like dial home or spam a bunch of people in Vermont or encrypt my hard disk to hold it for ransom. I propose we live in the world we find ourselves in rather than the one we wish we had. This world requires us to invest in understanding computers if we want to use them.
> The Jobs "bicycle for the mind" line was about the user experience, not the developer experience.
I used the phrase because the conference alluded to it, but I spent a while researching it and would rather not have. You're right that Jobs used it in a blunt, superficial way: https://www.youtube.com/watch?v=rTRzYjoZhIY. But he was also using the phrase to refer to technology from Xerox PARC that he did not fully appreciate. I'm glad the term has taken on a life of its own and grown to mean something closer to what PARC intended.
> the fallacy in the argument is that implementation details are relevant to everyone.
> They aren't. They're relevant to system builders and people who like tinkering because that is a separate domain to the user domain.
> [...] breaks down when the user domain and the tinkerer domain overlap, as they do in professional software development
Agree. The degree to which the two domains cross over depends on, well, the domain. A driver doesn't have to know much about the workings of a car to pass their test, but as I understand it, pilots do have to have a solid understanding of the workings of their aircraft.
And I think that applies to all types of tinkering, technical or not. And for a lot of 'use' you want to tinker but want the technical part to just work; which I agree with. But we still maybe want to keep those parts open, inviting and friendly somehow.
Just some thoughts along this rabbit hole I've been diving into lately--"game studies," "play," etc.-- https://publishup.uni-potsdam.de/frontdoor/index/index/docId... collects some references.
A good example would be that functions of the std library that deal with the filesystem often return an OsString instead of the str/String Rust uses elsewhere. This means you cannot ignore the fact that the stuff the OS gives you might not be valid UTF-8 bytes. All functions that can be interrupted from the outside or fail return their values wrapped in a Result type and make you handle it.
I think this is an example of good encapsulation — when something is a hard problem, it doesn't suddenly get easy, there are just clean ways of dealing with it. I learned a lot just by looking at how they did things and reading their explanations of why they did it differently than other languages. In fact Rust teaches me so much about programming that it would've been worth learning for these lessons alone.
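The behavior described above can be sketched concretely. `std::fs::read_dir` returns a `Result` you must handle, and `DirEntry::file_name` returns an `OsString` rather than a `String`, so the possibility of non-UTF-8 names can't be silently ignored:

```rust
use std::fs;

fn main() {
    // read_dir returns an io::Result: the failure case must be handled.
    match fs::read_dir(".") {
        Ok(entries) => {
            for entry in entries {
                // Reading an individual directory entry can fail too.
                let entry = entry.expect("failed to read directory entry");
                // file_name() is an OsString, not a String: the OS makes
                // no promise that names are valid UTF-8.
                let name = entry.file_name();
                match name.to_str() {
                    Some(utf8) => println!("utf-8 name: {}", utf8),
                    None => println!("non-utf-8 name: {:?}", name),
                }
            }
        }
        Err(e) => eprintln!("could not read directory: {}", e),
    }
}
```

The hard problem (arbitrary OS byte sequences, fallible I/O) is still there; the API just forces you to walk through it with your eyes open.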
This is the essence of why optional/result types lead to more stable systems than exceptions.
With exceptions they can come from anywhere, you don't know if you've handled all of them (depends on the language). It's goto with a new name.
With result types you're forced to deal with corner cases, or if you unwrap, you know you've omitted some corner cases, but it's still very visible what you did in your code.
With exceptions your code looks clean when it's dirty, with results and optionals your code looks dirty when it's dirty.
This is a talk about F#, but it explains the concept of railway oriented programming quite well. It's a paradigm for keeping your data flow clean with result types and optionals.
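The same railway idea translates directly to Rust (the talk uses F#). This is a minimal sketch with made-up function names: each step returns a `Result`, and the `?` operator switches the pipeline onto the error track as soon as any step fails.

```rust
// Illustrative pipeline: parse -> validate -> describe.
fn parse_age(input: &str) -> Result<u32, String> {
    input
        .trim()
        .parse::<u32>()
        .map_err(|e| format!("not a number: {}", e))
}

fn validate_age(age: u32) -> Result<u32, String> {
    if age <= 130 {
        Ok(age)
    } else {
        Err(format!("implausible age: {}", age))
    }
}

fn describe(age: u32) -> Result<String, String> {
    Ok(format!("age is {}", age))
}

// The "railway": three switches composed into one track.
// `?` short-circuits to the error track on the first failure.
fn process(input: &str) -> Result<String, String> {
    let age = parse_age(input)?;
    let age = validate_age(age)?;
    describe(age)
}

fn main() {
    assert_eq!(process("42"), Ok("age is 42".to_string()));
    assert!(process("abc").is_err()); // derails at parse_age
    assert!(process("999").is_err()); // derails at validate_age
}
```

Every corner case is either handled or visibly propagated in the signature, which is exactly the "dirty code looks dirty" property described above.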
You can still teach people how things work or you can still have a list of potential implementations of an interface.
Where the internals matter you would either define very strict contracts or not use abstraction mechanisms that hide internals. But where it matters is not everywhere.
Rick: Listen, Morty. Just b-*burp*-ecause you wrap something in an abstraction doesn't mean the complexity "goes away". It just means you've put off dealing with it -- usually until it fails. And when it does fail, you now have another layer of shit to dig through to find the root cause. Walk into the fire, Morty. Embrace the suck. And think about how you will handle error conditions before you write the happy path.
(I'm the author of OP.)
Quite coincidentally, the personas Microsoft used for their developer tools were called (in order of increasing experience) Mort, Elvis, and Einstein.
This leads to a subtle but crucial distinction between software “automating processes” without needing any humans in the loop -vs- amplifying the (expert) humans in the loop to do more stuff.
Another subtle (but arguably more important) effect comes from McLuhan’s insight that the medium strongly influences the message. If the medium/tool is not transparent, then neither are the biases it imposes on the tool user. While that doesn’t lead to obvious bugs, it puts blinders on human thought processes and creativity.
BTW, one last point, in case it isn’t obvious: this goal of transparency in software can be seen as underpinning the free software movement.
This sounds rather vague, "encourage curiosity about internals". If I don't need to understand the internals why should I spend time trying to understand them? Isn't it better to "hide their details" into another compilation unit or module, which only exposes its API through an interface so I can easily understand WHAT it is doing, without having to understand HOW it does that?
It sounds like they suggest instead of sub-routines/procedures/functions I should have a single big main program, because those sub-routines/procedures/functions would be hiding information from me.
I just propose to not maintain 3 levels of functions where one will suffice, just to preserve some "interface" for "backwards compatibility".
You aren't forced to understand internals. But it is an explicit design goal to make the internals comprehensible against the day you want to learn about them.
But yes, I'm proposing to rely a lot less on compatibility going forward. A big reason for building a new stack rather than building atop existing interfaces is precisely to avoid backwards compatibility. A big reason to keep the codebase comprehensible is to shift the social contract, so that people can fend for themselves in the face of incompatible changes. By the same token, a big reason to avoid compatibility is so that we don't have to indefinitely make the task of future readers more difficult.
I think the age of computers is just starting. 99.999% of computer use is yet to happen. It seems shortsighted to hamper the future for the sake of the minuscule past. You wouldn't want to force the adult to hew to the short-sighted decisions of the toddler.
> It seems shortsighted to hamper the future for the sake of the minuscule past.
These two statements seem somewhat contradictory to me.
If we don't provide means for making systems where it is easy to validate their backwards compatibility then it becomes difficult to EVOLVE the system incrementally. That would seem to me to "hamper the future".
This is how biological evolution became a great success, not by "starting from scratch" every time but by making sure that new individuals were always more or less "backwards compatible".
Yes sometimes it makes sense to rewrite something from scratch but forsaking the idea of reusability, which is very closely related to the idea of "compatibility", just seems to me as hampering the future.
Not preparing for the future by not making sure you can easily ensure and validate backwards compatibility, is "hampering the future", in my view. :-)
You're right that there's a tension between present and future. But that's always true. My statements aren't contradictory in isolation, they just reflect this underlying tension. You have to navigate it no matter what point you choose on the spectrum.
> If we don't provide means for making systems where it is easy to validate their backwards compatibility...
This begs the question of why you need to validate backwards compatibility. What benefits do you get? If we have a list of benefits we can start working through them and figure out if something else may substitute for compatibility.
> This is how biological evolution became a great success, not by "starting from scratch" every time but by making sure that new individuals were always more or less "backwards compatible".
This analogy isn't helpful, because evolution is constantly testing lots of different approaches all at once, extremophiles and bacteria and cheetahs. While many lessons are shared between simple and complex life forms, there is also a lot of branching where different organisms come up with disjoint solutions to the same problems. Human supply chains are not nearly so redundant. Lessons from evolution don't always apply to something that's 'intelligently' designed.
> ..the idea of reusability, which is very closely related to the idea of "compatibility"..
It's worth separating these two ideas. You can have reuse without compatibility (reworking an existing codebase to fit an entirely new interface) and compatibility without reuse (cleanroom implementations of protocols).
Reuse is a difficult word to wrap my head around because it encompasses so much:
a) Calling a function at a new call-site.
b) Selecting a library for a use case it was designed for.
c) Mangling a library to support a use case it was not designed for.
d) Deploying copies of an AMI.
e) ... and much more
I'm very supportive of post-deployment reuse (e.g. d above) which is usually entirely automated. But I'm suspicious of large-scale reuse during the development process. We've been trying to make it work for decades now, and it just hasn't panned out. At what point do we reframe a goal as a delusion? I think we're way past it in this case.
Software reuse is not a "goal" but a common, well-understood practice. Call a function from more than one location, instantiate a class more than once and call its methods from several locations. We do that because it is helpful for development and maintenance, not because that would be our "goal".
I would think most software written say in Java uses that approach, in practice. Create multiple instances of a given class. So what would be the evidence that this approach has not "panned out" and is "delusional goal"?
Maybe your "avoid reuse" is a better approach. But I don't think there's a lot of evidence that would prove it to be so, not at least yet.
If you're copy-pasting code everywhere rather than creating a function, that's obviously terrible. But on the other hand we've all also seen functions with twelve arguments and 3 boolean flags that are trying to do too many things at once. Is there any "practice" that gives us good guidelines on at what point to fork a function into two? I haven't heard of any such. The fact that our "best practices" are one-sided is a strong sign that they're more religion than science. Science reasons about trade-offs.
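The trade-off above can be made concrete with a hedged sketch (all names here are illustrative): the same lookup written once as a flag-laden function, and then forked into two honestly named ones.

```rust
// Before: one function accreting boolean flags, so call sites read
// like `find_before(items, "b", true, true)` -- which flag is which?
fn find_before(items: &[&str], needle: &str, ignore_case: bool, from_end: bool) -> Option<usize> {
    let pred = |s: &&str| {
        if ignore_case {
            s.eq_ignore_ascii_case(needle)
        } else {
            *s == needle
        }
    };
    if from_end {
        items.iter().rposition(pred)
    } else {
        items.iter().position(pred)
    }
}

// After: forked into small functions whose names state the behavior.
fn find(items: &[&str], needle: &str) -> Option<usize> {
    items.iter().position(|s| *s == needle)
}

fn rfind_ignore_case(items: &[&str], needle: &str) -> Option<usize> {
    items.iter().rposition(|s| s.eq_ignore_ascii_case(needle))
}

fn main() {
    let items = ["a", "B", "b"];
    assert_eq!(find_before(&items, "b", true, true), Some(2));
    assert_eq!(find(&items, "a"), Some(0));
    assert_eq!(rfind_ignore_case(&items, "B"), Some(2));
}
```

Neither version is free: the first hides four behaviors behind opaque flags, the second multiplies names. The point is that this is a trade-off to reason about, not a rule to recite.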
Software isn't yet much of an evidence-based science. There's just too many variables. We don't know how to reason about the interactions between them.
Here are some writings on the dangers of over-abstraction. At the very least they're evidence that I'm not alone in these suspicions.
Yes, they (often) "suck", but many of them work as well and produce widely used software.
I would not call subclassing a "best practice" but a widely used form of reuse. It is a useful feature to have in your toolbox, in my experience, having programmed in Java and other OOP languages.
I think it boils down to "There is no Silver Bullet". Reuse is not a Silver Bullet. But it is "a" bullet. :-)
I'm not claiming functions are bad, or classes are bad, or inheritance is bad (there is evidence of the last, but it's orthogonal to this discussion). I'm claiming freezing interfaces for compatibility guarantees is often a bad idea. There needs to be far broader awareness of what you give up when you freeze an interface and commit to support it indefinitely.
It's about minimizing zones of ownership within a single computer. Right now our computers have thousands of zones of ownership, and it's easy for important use cases and bugs to fall through the cracks between zones of ownership. Freezing an interface fractures what used to be a single zone of ownership into two.
E.g. the primary goal of DTD was to validate XML against a simple grammar, but the same DTD can be used for auto-completion when editing the XML. Same data, different usage. Also, if we want to shift to a more powerful validation engine, e.g. XSD schema, we can parse DTDs and mechanically convert them into equivalent XSDs.
Now my question is does this learnability hold as the code grows? For example, would the transparency enabled by the stack be reflected in the codebase for a word processor built on this stack? I'd expect not. For one, I'm thinking about how the original program in my Marilyn Maloney demo (http://glench.com/MarilynMaloney/) had good primitives, grammar, and a visual representation, yet the code is still hard to understand and can be made much easier to understand with depicting behavior (the interactive UI I made).
Secondly, I'm thinking of the Alan Kay-ism "architecture dominates materials" i.e. the invention of the arch was much more effective than the invention of the brick. Mu seems like it could be a great, transparent material, but more important to how software is understood and modified is the architecture of how the material is organized (and personally I think this has a lot to do with UI design and human communication more than the technical foundation).
We'll have to see how higher levels of the stack develop. I started on this quest building a Lisp, so believe me when I say I can't wait to get to the high-level side of things. There doesn't seem any reason to imagine that we can't have good primitives, grammar and visual representation. My claim is merely that in addition to these interface properties the implementation for them is also important. And it gets short shrift in the conventional narrative.
That said, let's imagine that there's a strong case that we need to give up parser generators (or something similarly high-level) in order to give everybody the ability to understand their computers. I'd take that trade in a heartbeat.
But in the end, your BS radar is your own.
It's generally better to read hard things. They're hard because they have a lot of information density or a lot of gaps. But that also makes people confuse poor readability for quality. Something that has no content can be made to seem hard by stuffing it full of gobbledygook, whereas something that is trying to communicate an idea often goes through a few revisions trying to simplify the idea. If you have great ideas, you want the reader to be expending their brainpower trying to comprehend the idea, not translating it into simple English. (The exception is using jargon that's common to people at the right level.)
Did you see this recent thread? It is a paper on how to read papers and it was very useful and insightful for me.
From the linked pdf:
"Researchers must read papers for several reasons: to review them for a conference or a class, to keep current in their field, or for a literature survey of a new field. A typical researcher will likely spend hundreds of hours every year reading papers.
Learning to efficiently read a paper is a critical but rarely taught skill. Beginning graduate students, therefore, must learn on their own using trial and error. Students waste much effort in the process and are frequently driven to frustration.
For many years I have used a simple ‘three-pass’ approach to prevent me from drowning in the details of a paper before getting a bird’s-eye-view. It allows me to estimate the amount of time required to review a set of papers. Moreover, I can adjust the depth of paper evaluation depending on my needs and how much time I have. This paper describes the approach and its use in doing a literature survey."
One of the major effects in software right now is that we pour such a staggering amount of effort into our current forms and patterns that it can create an overwhelming barrier to "better ideas", because there's just no way that even a much better idea can come out of the gate and compete against the behemoth of all the work done with the current paradigms. So we're stuck with patterns written into our systems from 40-50 years ago that can only very slowly change.
As a more concrete example: my criticism of things like "visual programming" as "the brand new paradigm that is going to revolutionize programming because nobody has thought of it before" has gotten polished to the point where I can virtually copy & paste it to almost anyone who tries them (in the spirit of ). But the flip side of my polished criticism is that comparing software platforms with literally billions of person-hours poured into them to some guy's hobby project isn't necessarily a fair representation of what the project could be if it also had billions of hours poured into it. That's especially true since the hobby project is by necessity mostly built on a platform that embodies the wrong paradigm, one that tends to force itself onto the new project in various ways if it wants to come out of the gate able to deal with real problems like "parsing XML" or whathaveyou. Unfortunately, as is the way of these things, if it's going to "revolutionize programming" it has to actually beat the current systems in some useful manner, fairly conclusively, right out of the gate.
As another example, no points for guessing that in a hundred years, computer CPUs aren't going to look much like a modern CPU. Perhaps there won't even be a "Central" processing unit anymore. But I promise you the current architecture will probably hang on longer than you'd guess (I wanted to say "50 years" and honestly got a bit gunshy about that... for all their orders of magnitude more transistors modern CPUs are clearly related to what we had 50 years ago today!). It's going to be hard for that new paradigm to compete with something with so much effort poured into it.
But I think this push for "stimulating curiosity" is academic wank. That's not what bicycles of the mind are for - they're tools for building, not navel gazing. It sounds like Mu was built more for ego or prestige than to actually do anything meaningful.
Show me Mu as a bike I want to ride - not as a philosophical introspection on why Mu is in some way superior. Reading this PDF it's clearly more ideal than practical. And while it may be "see-through", its shape, interfaces, interactions and surface area are convoluted and poorly presented.
Maybe I am wrong to expect humanities out of this, but humans are who have to use this stuff/ride these bikes.