
Bicycles for the mind have to be see-through [pdf] - mkeeter
http://akkartik.name/akkartik-convivial-20200315.pdf
======
chenglou
No comment about Mu itself, but:

> Our approach to keeping software comprehensible is to reduce information
> hiding and abstraction, and instead encourage curiosity about internals

YES! Precisely. Most "bicycle for the mind" or "tool for thought" approaches
accidentally go in the completely opposite direction. So do many programming
languages and communities nowadays.

In reality, if you wanna teach, you have to unveil the guts of the system.
Sure, it's "scary" and all, but initial hand-holding & letting newcomers see
the internals, rather than presenting an overly polished surface at the
expense of everything else, makes for a much better-designed learning process.
Design is "how it works", after all. *

Related: a common move when folks defend abstraction & encapsulation is to
raise the issue that letting newcomers explore the internals causes too much
trouble. That's an obvious strawman; nobody is championing everyone writing
anything everywhere. Internals can stay read-only, but public. E.g. an
interface can expose its internal data types without allowing consumers to
actually leverage them (enforced through types or something else). This is
easily doable, and you don't sacrifice learnability in the service of
team-programming sanity.
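
A minimal sketch of what I mean, in TypeScript (hypothetical, names made up):
the internals are fully visible to anyone who reads the type, but the compiler
stops consumers from mutating them.

    // A counter whose internal state is public to read (so learners can
    // inspect it) but read-only to consumers.
    interface CounterState {
      readonly ticks: number;        // visible, not writable from outside
      readonly lastUpdated: number;  // exposed for curiosity, not mutation
    }

    function makeCounter() {
      const state = { ticks: 0, lastUpdated: Date.now() };
      return {
        increment(): void {
          state.ticks += 1;
          state.lastUpdated = Date.now();
        },
        // Expose the guts as a read-only view: learners can see how it
        // works, but `counter.state.ticks = 99` is a compile error.
        get state(): CounterState {
          return state;
        },
      };
    }

    const counter = makeCounter();
    counter.increment();
    console.log(counter.state.ticks); // 1 -- inspectable
    // counter.state.ticks = 99;      // error: ticks is read-only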

* This point seems to get dismissed quickly by folks who only care about visual polish nowadays. Imo this is naive. I care about visuals a _lot_, and we should definitely aim for both learning polish and surface polish. But you can't let the latter calcify the former.

My advice for all the "tool for thought"/"bicycle for the mind" projects: all
your opaque designs and small interface flourishes falter, in the long run, in
the face of a properly exposed and cleaned-up paradigm. This applies on a
smaller scope to modern libraries too. That boilerplate generator you
mega-fine-tuned to the point that you delight yourself in guessing the user's
mind is ultimately a roadblock in the learner's thought process. Expose the
boilerplate and document how you're automating it instead. You're trying to
teach fishing, not to show off your own fish.

Another good analogy I've heard elsewhere is to ask whether you're teaching
people how to cook, or just building a microwave. For a tool-for-thought
product, ask yourself whether you ended up accidentally making "an appliance".

~~~
TheOtherHobbes
I think it's more like confusing people who just want to cook by expecting
them to learn hob design and maintenance. The Jobs "bicycle for the mind" line
was about the user experience, not the developer experience.

So the fallacy in the argument is that implementation details are relevant to
everyone.

They aren't. They're relevant to system builders and people who like tinkering
_because that is a separate domain to the user domain._

Most users just want an appliance, because a good appliance - like Hypercard,
Excel, and maybe VBA - allows them to concentrate on the things that interest
them, not on the things that interested the appliance designer when they were
building their product.

This is fine for products in the app store - it's a mark of good design that
it reveals exactly as much complexity as it needs to, but no more - but it
breaks down when the user domain and the tinkerer domain overlap, as they do
in professional software development.

The reason it breaks down is because there is no general solution. No library
is ever going to be a perfect snap-in tool that solves your problem for you,
no language is going to hide OS-level and library-level abstractions with
perfect elegance, and no OS is ever going to have perfectly elegant
abstractions for file and network operations, simultaneous and synchronous
operations, and so on.

Everyone has opinions about how software should be designed, the problems are
essentially open-ended, tinkerers have a tendency to turn everything into an
excuse for tinkering instead of a streamlined product/service pipeline, and
the result is the mess we have today.

And this is probably how it has to be, given the limitations of human reason.

~~~
chenglou
Argh, I promised myself I wouldn't come back to this thread; the debates I've
been part of around this particular subject have been tiresome =)

I can assure you that I'm not conflating simple things like user experience vs
developer experience. That was the point of the asterisk.

I'm also very positive that the creators of Hypercard and Excel do _not_ think
of their creations as appliances.

The whole point of the discussion around computers and computer
languages/systems as tools for thought is the premise that computing can be
another form of mass literacy. Let's also not get into whether this should be
the case (and my opinion here is less naive than one might think); I'm just
saying that this is the debate. At the risk of exaggerating and drawing an
imperfect parallel, the medieval monks also considered human language literacy
a "developer" thing, e.g. "reading the Bible is a separate domain to the
everyday life of a consumer". And yes, they did make beautiful books as
products/appliances. Again, I'm not saying programming languages will ever be
useful for the masses to learn; that's another topic of discussion.

I agree very much that there is no general solution, and that tinkerers gotta
tinker. Though this is orthogonal to the discussion (FYI I absolutely despise
language tinkering/golfing these days; that's why I made sure to talk about
this topic without talking about Mu earlier).

~~~
scroot
> The whole point of the discussion around computers and computer
> languages/systems as tools for thought is the premise that computing can be
> another form of mass literacy. Let's also not get into whether this should
> be the case (and my opinion here is less naive than one might think); I'm
> just saying that this is the debate.

Exactly right. The Jobs line is a half-baked and perhaps half-understood take
on the thinking of people like Kay, Papert, et al. Anyone curious about this
line of thinking -- which has fallen way, way out of the popular computing
discourse -- might want to check out Andrea diSessa's "Changing Minds" for a
decent summary.

------
ssivark
I know very little about “mu” specifically, but the title makes a highly
under-appreciated point. I often notice a tendency to abstract away complexity
behind clean-looking interfaces. While that’s convenient for simple projects
with pre-envisioned use cases, using the tool in some novel way often results
in bugs because the abstraction is leaky. For that reason, I personally prefer
designs that are transparent over ones that attempt to paint a pretty exterior
on top of internal complexity. (Obviously correct, rather than not obviously
wrong.)

This leads to a subtle but crucial distinction between software “automating
processes” without needing any humans in the loop -vs- amplifying the (expert)
humans in the loop to do more stuff.

Another subtle (but arguably more important) effect comes from McLuhan’s
insight that the medium strongly influences the message. If the medium/tool is
not transparent, then neither are the biases it imposes on the tool user.
While that doesn’t lead to obvious bugs, it puts blinders on human thought
processes and creativity.

BTW, one last point, in case it isn’t obvious: this goal of transparency in
software can be seen as underpinning the free software movement.

~~~
drivers99
I’m looking forward to reading this paper. I feel like I’ve found my kind of
people. I’ve had this half-baked idea that people should be able to see what’s
going on in their computer, and I’ve been trying to picture what that would
look like. At the simplest level it’s the difference between seeing a trace of
what the computer is doing when it boots up vs being shown a pretty screen
which freezes up. Another is just the way so many software products are like
“let us worry about the details”. I want to be able to see everything from the
bytes in memory to what’s going on at every level of the system, and I want a
user interface that lets people do that.

~~~
thulecitizen
Check out the work of the MetaCurrency Project

[http://MetaCurrency.org](http://MetaCurrency.org)

------
galaxyLogic
"Our approach to keeping software comprehensible is to reduce information
hiding and abstraction, and instead encourage curiosity about internals."

This sounds rather vague, "encourage curiosity about internals". If I don't
need to understand the internals, why should I spend time trying to understand
them? Isn't it better to "hide their details" in another compilation unit or
module, which only exposes its API through an interface, so I can easily
understand WHAT it is doing without having to understand HOW it does it?

It sounds like they're suggesting that instead of
sub-routines/procedures/functions I should have a single big main program,
because those sub-routines/procedures/functions would be hiding information
from me.

~~~
akkartik
No, I use functions:
[https://github.com/akkartik/mu#readme](https://github.com/akkartik/mu#readme)

I just propose to not maintain 3 levels of functions where one will suffice,
just to preserve some "interface" for "backwards compatibility".
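
Here's the kind of layering I mean (a hypothetical sketch in TypeScript, not
Mu code):

    // Three layers of functions kept alive purely for backwards
    // compatibility, where one would suffice.

    // v1: the original frozen interface; now just a shim over v2.
    function parse(s: string): number {
      return parseV2(s, 10);
    }

    // v2: grew a radix parameter, then froze too; now a shim over v3.
    function parseV2(s: string, radix: number): number {
      return parseV3({ input: s, radix }).value;
    }

    // v3: the only layer doing real work.
    function parseV3(opts: { input: string; radix: number }): { value: number } {
      return { value: parseInt(opts.input, opts.radix) };
    }

Each frozen layer is one more indirection a reader has to wade through before
reaching code that does any real work.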

You aren't forced to understand internals. But it is an explicit design goal
to make the internals comprehensible against the day you want to learn about
them.

~~~
j88439h84
Are you proposing to break backward compatibility?

~~~
akkartik
Well, it's a new computer so there's no notion of compatibility.

But yes, I'm proposing to rely a lot less on compatibility going forward. A
big reason for building a new stack rather than building atop existing
interfaces is precisely to avoid backwards compatibility. A big reason to keep
the codebase comprehensible is to shift the social contract, so that people
can fend for themselves in the face of incompatible changes. By the same
token, a big reason to avoid compatibility is so that we don't have to
indefinitely make the task of future readers more difficult.

I think the age of computers is just starting. 99.999% of computer use is yet
to happen. It seems shortsighted to hamper the future for the sake of the
minuscule past. You wouldn't want to force the adult to hew to the
short-sighted decisions of the toddler.

~~~
galaxyLogic
> Well, it's a new computer so there's no notion of compatibility.

> It seems shortsighted to hamper the future for the sake of the minuscule
> past.

These two statements seem somewhat contradictory to me. If we don't provide
means for making systems where it is easy to validate their backwards
compatibility, then it becomes difficult to EVOLVE the system incrementally.
That would seem to me to "hamper the future".

This is how biological evolution became a great success, not by "starting from
scratch" every time but by making sure that new individuals were always more
or less "backwards compatible".

Yes, sometimes it makes sense to rewrite something from scratch, but forsaking
the idea of reusability, which is very closely related to the idea of
"compatibility", just seems to me to hamper the future.

Not preparing for the future, by not making sure you can easily ensure and
validate backwards compatibility, is "hampering the future", in my view. :-)

~~~
akkartik
We'll have to agree to disagree.

You're right that there's a tension between present and future. But that's
always true. My statements aren't contradictory in isolation, they just
reflect this underlying tension. You have to navigate it no matter what point
you choose on the spectrum.

> If we don't provide means for making systems where it is easy to validate
> their backwards compatibility...

This begs the question of _why_ you need to validate backwards compatibility.
What benefits do you get? If we have a list of benefits we can start working
through them and figure out if something else may substitute for
compatibility.

> This is how biological evolution became a great success, not by "starting
> from scratch" every time but by making sure that new individuals were always
> more or less "backwards compatible".

This analogy isn't helpful, because evolution is constantly testing lots of
different approaches all at once, extremophiles and bacteria and cheetahs.
While many lessons are shared between simple and complex life forms, there is
also a lot of branching where different organisms come up with disjoint
solutions to the same problems. Human supply chains are not nearly so
redundant. Lessons from evolution don't always apply to something that's
'intelligently' designed.

> ..the idea of reusability, which is very closely related to the idea of
> "compatibility"..

It's worth separating these two ideas. You can have reuse without
compatibility (reworking an existing codebase to fit an entirely new
interface) and compatibility without reuse (cleanroom implementations of
protocols).

Reuse is a difficult word to wrap my head around because it encompasses so
much:

a) Calling a function at a new call-site.

b) Selecting a library for a use case it was designed for.

c) Mangling a library to support a use case it was not designed for.

d) Deploying copies of an AMI.

e) ... and much more

I'm very supportive of post-deployment reuse (e.g. d above) which is usually
entirely automated. But I'm suspicious of large-scale reuse during the
development process. We've been trying to make it work for decades now, and it
just hasn't panned out. At what point do we reframe a goal as a delusion? I
think we're way past it in this case.

~~~
galaxyLogic
> But I'm suspicious of large-scale reuse during the development process.
> We've been trying to make it work for decades now, and it just hasn't panned
> out. At what point do we reframe a goal as a delusion? I think we're way
> past it in this case.

Software reuse is not a "goal" but a common, well-understood practice. Call a
function from more than one location; instantiate a class more than once and
call its methods from several locations. We do that because it is helpful for
development and maintenance, not because it is our "goal".

I would think most software written in, say, Java uses that approach in
practice: create multiple instances of a given class. So what would be the
evidence that this approach has not "panned out" and is a "delusional goal"?

Maybe your "avoid reuse" is a better approach. But I don't think there's a lot
of evidence that would prove it to be so, at least not yet.

~~~
akkartik
I'd argue there isn't a lot of evidence either way. What you call "common
well-understood practice" I strongly suspect is just cargo culting. As
evidence: it doesn't seem to make any projects turn out better. All projects
eventually suck over time, no matter how well they follow "best practices".

If you're copy-pasting code everywhere rather than creating a function, that's
obviously terrible. But on the other hand, we've all also seen functions with
twelve arguments and 3 boolean flags that are trying to do too many things at
once. Is there any "practice" that gives us good guidelines on when to fork a
function into two? I haven't heard of one. The fact that our "best practices"
are one-sided is a strong sign that they're more religion than science.
Science reasons about trade-offs.
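
A made-up illustration of the missing guideline (TypeScript, purely
hypothetical):

    // One function that has accreted flags until it does several jobs...
    function render(text: string, asHtml: boolean, escape: boolean,
                    trim: boolean): string {
      let out = trim ? text.trim() : text;
      if (escape) out = out.replace(/&/g, "&amp;").replace(/</g, "&lt;");
      return asHtml ? `<p>${out}</p>` : out;
    }

    // ...versus forking it once the call sites diverge.
    function renderPlain(text: string): string {
      return text.trim();
    }

    function renderHtml(text: string): string {
      const escaped = text.trim()
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;");
      return `<p>${escaped}</p>`;
    }

Our slogans ("don't repeat yourself") all push toward the first form; nothing
tells you when it should become the second.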

Software isn't yet much of an evidence-based science. There's just too many
variables. We don't know how to reason about the interactions between them.

Here are some writings on the dangers of over-abstraction. At the very least
they're evidence that I'm not alone in these suspicions.

[http://www.sandimetz.com/blog/2016/1/20/the-wrong-abstraction](http://www.sandimetz.com/blog/2016/1/20/the-wrong-abstraction)

[http://programmingisterrible.com/post/139222674273/write-code-that-is-easy-to-delete-not-easy-to](http://programmingisterrible.com/post/139222674273/write-code-that-is-easy-to-delete-not-easy-to)

[http://bravenewgeek.com/abstraction-considered-harmful](http://bravenewgeek.com/abstraction-considered-harmful)

[http://dimitri-on-software-development.blogspot.de/2009/12/disadvantages-of-code-reuse.html](http://dimitri-on-software-development.blogspot.de/2009/12/disadvantages-of-code-reuse.html)

~~~
galaxyLogic
> All projects eventually suck over time, no matter how well they follow "best
> practices".

Yes, they (often) "suck", but many of them work as well and produce widely
used software.

I would not call subclassing a "best practice" but a widely used form of
reuse. It is a useful feature to have in your toolbox, in my experience,
having programmed in Java and other OOP languages.

I think it boils down to "There is no Silver Bullet". Reuse is not a Silver
Bullet. But it is "a" bullet. :-)

~~~
akkartik
If we've gone from thinking of it as essential to just "a" bullet, I'm happy.
Now we can have a productive conversation about what other bullets this bullet
displaces in your toolbox, and whether that's worth it.

I'm not claiming functions are bad, or classes are bad, or inheritance is bad
(there is evidence of the last, but it's orthogonal to this discussion). I'm
claiming freezing interfaces for compatibility guarantees is often a bad idea.
There needs to be far broader awareness of what you give up when you freeze an
interface and commit to support it indefinitely.

It's about minimizing zones of ownership within a single computer. Right now
our computers have thousands of zones of ownership, and it's easy for
important use cases and bugs to fall through the cracks between zones of
ownership. Freezing an interface fractures what used to be a single zone of
ownership into two.

------
h0l0cube
The paper name-drops Ivan Illich and "Tools for Conviviality". If you are
intrigued by his ideas, you might be interested in the blog by L. M. Sacasas,
The Convivial Society. His writing holds plenty of insight on the intersection
of technology and society.

[https://theconvivialsociety.substack.com/](https://theconvivialsociety.substack.com/)

------
Glench
I love the effort to make the stack fully transparent and learnable. This
seems so important! I wish more people had the ambition to re-invent the
basics of our computing platform to see how things could be different.

Now my question is: does this learnability hold as the code grows? For
example, would the transparency enabled by the stack be reflected in the
codebase for a word processor built on this stack? I'd expect _not_. For one,
I'm thinking about how the original program in my Marilyn Maloney demo
([http://glench.com/MarilynMaloney/](http://glench.com/MarilynMaloney/)) had
good primitives, grammar, and a visual representation, yet the code is still
hard to understand, and was made much easier to understand by depicting
behavior (the interactive UI I made).

Secondly, I'm thinking of the Alan Kay-ism "architecture dominates materials"
i.e. the invention of the arch was much more effective than the invention of
the brick. Mu seems like it could be a great, transparent material, but more
important to how software is understood and modified is the architecture of
how the material is organized (and personally I think this has a lot to do
with UI design and human communication more than the technical foundation).

~~~
akkartik
These are good questions! It may well be that there's an architectural
breakthrough out there that will obsolete this whole approach. It's one of
very many ways it can fail. Think of it as a hedge, at a societal level. What
if we aren't able to come up with a technological breakthrough? After all,
software has lagged Moore's law for decades now. I think there's a very real
danger in the next few decades that software will get regulated like other
fields before it, killing the intrinsically motivated aspect of it entirely.
That would be a huge loss to society.

We'll have to see how higher levels of the stack develop. I started on this
quest building a Lisp, so believe me when I say I can't wait to get to the
high-level side of things. There doesn't seem to be any reason to imagine that
we can't have good primitives, grammar and visual representation. My claim is
merely that, in addition to these interface properties, the implementation
behind them is also important. And it gets short shrift in the conventional
narrative.

That said, let's imagine that there's a strong case that we need to give up
parser generators (or something similarly high-level) in order to give
everybody the ability to understand their computers. I'd take that trade in a
heartbeat.

------
muzani
I'd like to read this, but it's triggering my BS radar with the amount of
gobbledygook and sentences like "Creating an entire new stack may seem like
tilting at windmills, but the mainstream Software-Industrial Complex suffers
from obvious defects even in the eyes of those who don’t share our
philosophy."

~~~
akkartik
I wrote it for an academic conference
([https://2020.programming-conference.org/home/salon-2020](https://2020.programming-conference.org/home/salon-2020)),
which had as its theme an essay by Ivan Illich
([http://akkartik.name/illich.pdf](http://akkartik.name/illich.pdf), pdf, 67
pages).

But in the end, your BS radar is your own.

~~~
muzani
That's fair. But academia is often full of BS, pardon the language, and so the
radar needs to be cranked up. With, say, Medium articles, one can tell within
seconds of skimming whether it's good or not. With academic papers it can take
hours, if not weeks, to discover whether it's complete BS or whether I'm just
not smart enough, so I can only go off minor cues.

It's generally better to read hard things. They're hard because they have a
lot of information density or a lot of gaps. But that also makes people
confuse poor readability with quality. Something that has no content tries to
look hard by being stuffed full of gobbledygook, whereas something that is
trying to communicate an idea often goes through a few revisions to simplify
it. If you have great ideas, you want the reader to be expending their
brainpower comprehending the idea, not translating it into simple English.
(The exception is jargon that's common to people at the right level.)

~~~
thulecitizen
> With academic papers it can take hours, if not weeks, to discover whether
> it's complete BS

Did you see this recent thread? It links a paper on how to read papers, and
it was very useful and insightful for me.

[https://news.ycombinator.com/item?id=21979350](https://news.ycombinator.com/item?id=21979350)

From the linked pdf:

"Researchers must read papers for several reasons: to review them for a
conference or a class, to keep current in their field, or for a literature
survey of a new field. A typical researcher will likely spend hundreds of
hours every year reading papers.

Learning to efficiently read a paper is a critical but rarely taught skill.
Beginning graduate students, therefore, must learn on their own using trial
and error. Students waste much effort in the process and are frequently driven
to frustration.

For many years I have used a simple ‘three-pass’ approach to prevent me from
drowning in the details of a paper before getting a bird’s-eye-view. It allows
me to estimate the amount of time required to review a set of papers.
Moreover, I can adjust the depth of paper evaluation depending on my needs and
how much time I have. This paper describes the approach and its use in doing a
literature survey."

------
Zenst
I really wish such papers had double spacing, or better still, that Adobe
Reader let you adjust the spacing. That would be an amazing feature; some
layouts are easier to read with more spacing between the lines, at least for
me.

------
meesterdude
My bicycle has been Ruby on Rails for over 10 years, and I think it is hard to
beat. Almost all of the underlying framework is easily inspectable and
changeable to suit your needs, while still providing sharp knives that one
could use to cut themselves if not careful.

But I think this push for "stimulating curiosity" is academic wank. That's not
what bicycles for the mind are for - they're tools for building, not navel
gazing. It sounds like Mu was built more for ego or prestige than to actually
do anything meaningful.

Show me Mu as a bike I want to ride - not as a philosophical introspection on
why Mu is in some way superior. Reading this PDF, it's clearly more ideal than
practical. And while it may be "see-through", its shape, interfaces,
interactions, and surface area are convoluted and poorly presented.

Maybe I am wrong to expect humanities out of this, but humans are the ones who
have to use this stuff/ride these bikes.

