Bicycles for the mind have to be see-through [pdf] (akkartik.name)
210 points by mkeeter 13 days ago | 43 comments

No comment about Mu itself, but:

> Our approach to keeping software comprehensible is to reduce information hiding and abstraction, and instead encourage curiosity about internals

YES! Precisely. Most "bicycle for the mind" or "tool for thought" approaches accidentally go in the completely opposite direction. So do many programming languages and communities nowadays.

In reality, if you wanna teach, you have to unveil the guts of the system. Sure, it's "scary" and all, but still, initial hand-holding & letting newcomers see the internals, rather than presenting an overly polished surface at the expense of everything else, makes for a much better-designed learning process. Design is "how it works" after all. *

Related: a common misconception when folks defend abstraction & encapsulation is to raise the issue that letting newcomers explore the internals causes too much trouble. That's an obvious strawman; these methods are not championing everyone writing anything everywhere. Internals can stay read-only, but public. E.g. an interface can expose its internal data types without allowing consumers to actually make use of that (enforced through types or something else). This is easily doable, and you don't sacrifice learnability in the service of team-programming sanity.
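
To make that concrete, here's a minimal Rust-flavoured sketch (the Cache type and its fields are made up for illustration): the internals stay inspectable, but the type system only hands out an immutable view.

    use std::collections::HashMap;

    // Hypothetical cache whose internals are visible but not writable from outside.
    pub struct Cache {
        entries: HashMap<String, String>, // private field: no direct mutation
    }

    impl Cache {
        pub fn new() -> Self {
            Cache { entries: HashMap::new() }
        }

        pub fn insert(&mut self, key: &str, value: &str) {
            self.entries.insert(key.to_string(), value.to_string());
        }

        // Expose the internal representation read-only, for the curious.
        pub fn internals(&self) -> &HashMap<String, String> {
            &self.entries
        }
    }

    fn main() {
        let mut cache = Cache::new();
        cache.insert("a", "1");
        println!("{:?}", cache.internals()); // you can look inside...
        // ...but `cache.internals().insert(...)` would not compile: the view is immutable.
    }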

* This point seems to get dismissed quickly by folks who only care about visual polish nowadays. Imo this is naive. I care about visuals a _lot_, and we should definitely aim for both learning polish and surface polish. But you can't let the latter calcify the former.

My advice for all the "tool for thought"/"bicycle for the mind" projects: in the long run, all your opaque designs and small interface flourishes falter in the face of a properly exposed and cleaned-up paradigm. This applies on a smaller scope to modern libraries too. That boilerplate generator you mega-fine-tuned to the point where you delight yourself by guessing the user's mind is, ultimately, a roadblock in the learner's thought process. Expose the boilerplate and document how you're automating it instead. You're trying to teach fishing, not to show off your own fish.

Another good analogy I've heard elsewhere is to ask whether you're teaching people how to cook, or just building a microwave. For a tool for thought product ask yourself if you ended up accidentally making "an appliance".


The original idea of encapsulation was not to rely on arbitrary internal knowledge but to provide stable procedures. One of the real-world examples was not to rely on Dorothy in marketing putting the red copy into the blue folder, because next month Dorothy may be on vacation, red copies become yellow, and the blue folder is replaced with a green one. Instead the procedure should refer to the Senior Marketing Assistant, the marketing copy, and the folder for the current year. No excessive hiding, just stability.

Yes, but what if that appearance of stability is unwarranted? After all, the underlying process is just Dorothy moving things around, and when Dorothy's on vacation things might not be working the way they should.

This brings up an interesting aside: the right way to build software for "agile" processes might look very different from the right way to build software for "frozen" (non-agile) processes (big corp, stable processes, few surprises if any, etc.). Ideas such as abstraction/encapsulation that work well for the latter situation might be totally out of place in the former.


I think it's more like confusing people who just want to cook by expecting them to learn hob design and maintenance. The Jobs "bicycle for the mind" line was about the user experience, not the developer experience.

So the fallacy in the argument is that implementation details are relevant to everyone.

They aren't. They're relevant to system builders and people who like tinkering because that is a separate domain to the user domain.

Most users just want an appliance, because a good appliance - like Hypercard, Excel, and maybe VBA - allows them to concentrate on the things that interest them, not on the things that interested the appliance designer when they were building their product.

This is fine for products in the app store - it's a mark of good design that it reveals exactly as much complexity as it needs to, but no more - but it breaks down when the user domain and the tinkerer domain overlap, as they do in professional software development.

The reason it breaks down is because there is no general solution. No library is ever going to be a perfect snap-in tool that solves your problem for you, no language is going to hide OS-level and library-level abstractions with perfect elegance, and no OS is ever going to have perfectly elegant abstractions for file and network operations, simultaneous and synchronous operations, and so on.

Everyone has opinions about how software should be designed, the problems are essentially open-ended, tinkerers have a tendency to turn everything into an excuse for tinkering instead of a streamlined product/service pipeline, and the result is the mess we have today.

And this is probably how it has to be, given the limitations of human reason.


Argh, I promised myself I wouldn't come back to this thread actually; the debates I've been part of around this particular subject have been tiresome =)

I can assure you that I'm not conflating simple things like user experience vs developer experience. That was the point of the asterisk.

I'm also very positive that the creators of Hypercard and Excel do _not_ think of their creations as appliances.

The whole point of the discussion around computers and computer languages/systems as a tool for thought is the presumption that computing can be another form of mass literacy. Let's also not get into whether this should be the case (and my opinion here is less naive than one might think); I'm just saying that this is the debate. At the risk of exaggerating and drawing an imperfect parallel, the medieval monks also considered human-language literacy a "developer" thing, e.g. "reading the Bible is a separate domain to the everyday life of a consumer". And yes, they did make beautiful books as products/appliances. Again, I'm not saying programming languages will ever be useful for the masses to learn; that's another topic of discussion.

I agree very much that there is no general solution, and that tinkerers gotta tinker. Though this is orthogonal to the discussion (FYI I absolutely despise language tinkering/golfing these days; that's why I made sure to talk about this topic without talking about Mu earlier).


> The whole point of the discussion around computers and computer languages/systems as a tool for thought is the presumption that computing can be another form of mass literacy. Let's also not get into whether this should be the case (and my opinion here is less naive than one might think); I'm just saying that this is the debate.

Exactly right. The Jobs line is a half-baked and perhaps half-understood take on the thinking of people like Kay, Papert, et al. Anyone curious about this line of thinking -- which has fallen way, way out of the popular computing discourse -- might want to check out Andrea diSessa's "Changing Minds" for a decent summary.


(Author of OP here.) I don't think computers are like appliances at all. Given the outsized effect that software infrastructure has on our lives, I think computers are more like laws. And my lesson from (a layman's reading of) Ancient Rome and the last 300 years of Western Civilization is that laws are too important to be left to legislators. This is a failure on both sides. We the masses have wanted to treat the hard work of law-making as an externality, and legislators have been happy to take control of them for us because it serves their interest. Both sides think they're pulling one over on the other, and both are being short-sighted.

To bring it back to software, it would be great to separate the user experience from the developer experience if we knew how. But we don't currently know how to make a program do precisely what the user intends it to do but not something else like dial home or spam a bunch of people in Vermont or encrypt my hard disk to hold it for ransom. I propose we live in the world we find ourselves in rather than the one we wish we had. This world requires us to invest in understanding computers if we want to use them.

> The Jobs "bicycle for the mind" line was about the user experience, not the developer experience.

I used the phrase because the conference alluded to it, but I spent a while researching it and would rather not have. You're right that Jobs used it in a blunt, superficial way: https://www.youtube.com/watch?v=rTRzYjoZhIY. But he was also using the phrase to refer to technology from Xerox PARC that he did not fully appreciate. I'm glad the term has taken on a life of its own and grown to mean something closer to what PARC intended.


Mandatory link to the relevant Spolsky article, The Law of Leaky Abstractions: https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a...

> the fallacy in the argument is that implementation details are relevant to everyone.

> They aren't. They're relevant to system builders and people who like tinkering because that is a separate domain to the user domain.

> [...] breaks down when the user domain and the tinkerer domain overlap, as they do in professional software development

Agree. The degree to which the two domains cross over depends on, well, the domain. A driver doesn't have to know much about the workings of a car to pass their test, but as I understand it, pilots do have to have a solid understanding of the workings of their aircraft.


I agree with all the 'appliance' stuff; a lot of the time that's what you want. But I also think we should be sure to enable and flood computers with "play" spaces--which are about such 'tinkering', but I think it's much, much more than just tinkering--it's really what enables us to express our humanity, IMO.

And I think that goes for all types of tinkering, technical or not. And for a lot of 'use' you want to tinker but want the technical part to just work, which I agree with. But we still maybe want to keep these spaces open, inviting, and friendly somehow.

Just some thoughts along this rabbit hole I've been diving into lately--"game studies," "play," etc.-- https://publishup.uni-potsdam.de/frontdoor/index/index/docId... collects some references.


It is weird, but this made me immediately think about Rust: when they tried to solve a "hard problem" in the language, instead of hiding it away they made it front and center and tried to give you a comfortable way of dealing with it — but you cannot ignore it.

A good example would be that functions in the standard library that deal with the filesystem often return an OsStr/OsString instead of the str/String Rust uses elsewhere. This means you cannot ignore the fact that what the OS gives you might not be valid UTF-8. All functions that can be interrupted from the outside or can fail return their values wrapped in a Result type and make you handle it.
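
Concretely, it looks roughly like this (an untested sketch that just lists the current directory; nothing here is specific to Mu or the paper):

    use std::fs;

    fn main() -> std::io::Result<()> {
        // read_dir returns a Result, so the failure path is explicit (`?` propagates it).
        for entry in fs::read_dir(".")? {
            let entry = entry?;           // each entry can fail too
            let name = entry.file_name(); // an OsString, not a String
            // Converting to UTF-8 can fail, and the types make you decide what happens then.
            match name.to_str() {
                Some(s) => println!("{}", s),
                None => println!("(non-UTF-8 name: {:?})", name),
            }
        }
        Ok(())
    }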

I think this is an example of good encapsulation — when something is a hard problem, it doesn't suddenly get easy; there are just clean ways of dealing with it. I learned a lot just by looking at how they did things and reading their explanations of why they did it differently than other languages. In fact, Rust has taught me so much about programming that it would've been worth learning for these lessons alone.


>A good example would be that functions in the standard library that deal with the filesystem often return an OsStr/OsString instead of the str/String Rust uses elsewhere. This means you cannot ignore the fact that what the OS gives you might not be valid UTF-8. All functions that can be interrupted from the outside or can fail return their values wrapped in a Result type and make you handle it.

This is the essence of why optional/result types lead to more stable systems than exceptions.

With exceptions, they can come from anywhere and you don't know if you've handled all of them (depending on the language). It's goto with a new name.

With result types you're forced to deal with corner cases; or, if you unwrap, you know you've omitted some corner cases, but what you did is still very visible in your code.

With exceptions your code looks clean when it's dirty; with results and optionals your code looks dirty when it's dirty.

This is a talk about F#, but it explains the concept of railway-oriented programming quite well. It's a paradigm for keeping your data flow clean with result types and optionals.

https://www.slideshare.net/ScottWlaschin/railway-oriented-pr...
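
The same idea carries over to Rust; here's a rough sketch with invented validation steps, where each step returns a Result so the happy path stays a straight chain and the first failure switches onto the error track:

    // Invented steps, just to show the shape of the pattern.
    fn parse_age(input: &str) -> Result<u32, String> {
        input.trim().parse().map_err(|_| format!("not a number: {:?}", input))
    }

    fn check_range(age: u32) -> Result<u32, String> {
        if age <= 130 { Ok(age) } else { Err(format!("implausible age: {}", age)) }
    }

    fn process(input: &str) -> Result<u32, String> {
        parse_age(input).and_then(check_range) // stay on the success track, or short-circuit
    }

    fn main() {
        println!("{:?}", process("42"));     // Ok(42)
        println!("{:?}", process("banana")); // Err("not a number: \"banana\"")
        println!("{:?}", process("999"));    // Err("implausible age: 999")
    }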


Being curious about internals and giving users the means to understand them doesn't conflict with the goal of a) putting them behind a hiding layer that reduces the aspects a user needs to keep track of and b) avoiding dependence on those internals staying the same.

You can still teach people how things work or you can still have a list of potential implementations of an interface.

Where the internals matter you would either define very strict contracts or not use abstraction mechanisms that hide internals. But where it matters is not everywhere.


In the past (http://akkartik.name/post/libraries) I've tried to distinguish between giving things a name to avoid having to work through details over and over again, and relying on something built by others so one never has to think about the details. But this has been a difficult point to communicate. I usually try to avoid the term 'abstraction' because it's gotten so debased. But here for an academic audience I go with the grain of how people use the word.

Abstraction, to me, just makes it easy to delay implementation. There are times when I need to use an interface now, but it won't actually be implemented until next week, when the client gets us our API key or someone else finds time to do it. Using an abstraction means I don't have to change any of the code that uses the interface; I just change the implementation behind it. This is also a good way to make a prototype that is easy to optimize: by abstracting a very costly computation, you can focus on making that computation run faster without having to refactor every line of code where that computation is used.
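
For example (a hypothetical Rust-flavoured sketch, names invented): define the interface now, stub it out, and later swap in the real implementation without touching any call sites.

    // The interface the rest of the code programs against.
    trait PriceFeed {
        fn price(&self, symbol: &str) -> f64;
    }

    // Stand-in until the real feed (and its API key) shows up next week.
    struct FakeFeed;
    impl PriceFeed for FakeFeed {
        fn price(&self, _symbol: &str) -> f64 {
            42.0 // canned value, good enough to keep developing against
        }
    }

    // Call sites only know about the trait, so swapping in the real feed later
    // requires no changes here.
    fn report(feed: &dyn PriceFeed, symbol: &str) {
        println!("{}: {}", symbol, feed.price(symbol));
    }

    fn main() {
        report(&FakeFeed, "ACME");
    }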

Morty: Aw, geez, Rick, don't you think we ought to abstract away the low-level details here?

Rick: Listen, Morty. Just bburpecause you wrap something in an abstraction doesn't mean the complexity "goes away". It just means you've put off dealing with it -- usually until it fails. And when it does fail, you now have another layer of shit to dig through to find the root cause. Walk into the fire, Morty. Embrace the suck. And think about how you will handle error conditions before you write the happy path.


That's hilarious. Is it a real quote from the show?!

(I'm the author of OP.)


Nah, I made it up. I've noticed a tension between two dominant programmer archetypes in my career -- one naïve and inexperienced, but well-meaning and careful (the programmer I'm supposed to be), the other more experienced, but jaded by the passing of frameworks, methodologies, and other trends, who just wants to hack things into working order (the programmer I appear to be becoming as I age). They were pretty good fits for Rick and Morty.

Quite coincidentally, the personas Microsoft used for their developer tools were called (in order of increasing experience) Mort, Elvis, and Einstein.


This was/is a failure of pedagogy. I was taught abstraction as a virtue, embraced it, learned this lesson, and then noticed that the experienced programmers (not the pedagogists and professional authors) had been saying all along that until you had seen three distinct cases of something in the wild you were in no position to create an abstraction for it.

I know very little about “mu” specifically, but the title is a highly under-appreciated point. I often notice a tendency to abstract complexity behind clean-looking interfaces. While that’s convenient for simple projects with pre-envisioned use cases, using the tool in some novel way often results in bugs due to the abstraction being leaky. For that reason, I personally prefer designs that are transparent, rather than ones that attempt to paint a pretty exterior on top of internal complexity. (Obviously correct, rather than not obviously wrong.)

This leads to a subtle but crucial distinction between software “automating processes” without needing any humans in the loop -vs- amplifying the (expert) humans in the loop to do more stuff.

Another subtle (but arguably more important) effect comes from McLuhan’s insight that the medium strongly influences the message. If the medium/tool is not transparent, then neither are the biases it imposes on the tool user. While that doesn’t lead to obvious bugs, it leads to strapping blinders on human thought processes and creativity.

BTW, one last point, in case it isn’t obvious: this goal of transparency in software can be seen as underpinning the free software movement.


I’m looking forward to reading this paper. I feel like I’ve found my kind of people. I’ve had this half-baked idea about how people should be able to see what’s going on in their computer, and I’ve been trying to picture what that would look like. At the simplest level it’s the difference between seeing a trace of what the computer is doing when it boots up vs being shown a pretty screen which freezes up. Another is just the way so many software products are like “let us worry about the details”. I want to be able to see everything from the bytes in memory to what’s going on at every level of the system, and I want a user interface that lets people do that.

Check out the work of the MetaCurrency Project

http://MetaCurrency.org


"Our approach to keeping software comprehensible is to reduce information hiding and abstraction, and instead encourage curiosity about internals."

This sounds rather vague, "encourage curiosity about internals". If I don't need to understand the internals, why should I spend time trying to understand them? Isn't it better to "hide their details" in another compilation unit or module, which only exposes its API through an interface, so I can easily understand WHAT it is doing without having to understand HOW it does it?

It sounds like they suggest instead of sub-routines/procedures/functions I should have a single big main program, because those sub-routines/procedures/functions would be hiding information from me.


No, I use functions: https://github.com/akkartik/mu#readme

I just propose to not maintain 3 levels of functions where one will suffice, just to preserve some "interface" for "backwards compatibility".

You aren't forced to understand internals. But it is an explicit design goal to make the internals comprehensible against the day you want to learn about them.


Are you proposing to break backward compatibility?

Well, it's a new computer so there's no notion of compatibility.

But yes, I'm proposing to rely a lot less on compatibility going forward. A big reason for building a new stack rather than building atop existing interfaces is precisely to avoid backwards compatibility. A big reason to keep the codebase comprehensible is to shift the social contract, so that people can fend for themselves in the face of incompatible changes. By the same token, a big reason to avoid compatibility is so that we don't have to indefinitely make the task of future readers more difficult.

I think the age of computers is just starting. 99.999% of computer use is yet to happen. It seems shortsighted to hamper the future for the sake of the miniscule past. You wouldn't want to force the adult to hew to the short-sighted decisions of the toddler.


> Well, it's a new computer so there's no notion of compatibility.

> It seems shortsighted to hamper the future for the sake of the miniscule past.

These two statements seem somewhat contradictory to me. If we don't provide means for making systems where it is easy to validate their backwards compatibility, then it becomes difficult to EVOLVE the system incrementally. That would seem to me to "hamper the future".

This is how biological evolution became a great success, not by "starting from scratch" every time but by making sure that new individuals were always more or less "backwards compatible".

Yes, sometimes it makes sense to rewrite something from scratch, but forsaking the idea of reusability, which is very closely related to the idea of "compatibility", just seems to me like hampering the future.

Failing to prepare for the future, by not making sure you can easily ensure and validate backwards compatibility, is "hampering the future", in my view. :-)


We'll have to agree to disagree.

You're right that there's a tension between present and future. But that's always true. My statements aren't contradictory in isolation, they just reflect this underlying tension. You have to navigate it no matter what point you choose on the spectrum.

> If we don't provide means for making systems where it is easy to validate their backwards compatibility...

This raises the question of why you need to validate backwards compatibility. What benefits do you get? If we have a list of benefits we can start working through them and figure out if something else may substitute for compatibility.

> This is how biological evolution became a great success, not by "starting from scratch" every time but by making sure that new individuals were always more or less "backwards compatible".

This analogy isn't helpful, because evolution is constantly testing lots of different approaches all at once, extremophiles and bacteria and cheetahs. While many lessons are shared between simple and complex life forms, there is also a lot of branching where different organisms come up with disjoint solutions to the same problems. Human supply chains are not nearly so redundant. Lessons from evolution don't always apply to something that's 'intelligently' designed.

> ..the idea of reusability, which is very closely related to the idea of "compatibility"..

It's worth separating these two ideas. You can have reuse without compatibility (reworking an existing codebase to fit an entirely new interface) and compatibility without reuse (cleanroom implementations of protocols).

Reuse is a difficult word to wrap my head around because it encompasses so much:

a) Calling a function at a new call-site.

b) Selecting a library for a use case it was designed for.

c) Mangling a library to support a use case it was not designed for.

d) Deploying copies of an AMI.

e) ... and much more

I'm very supportive of post-deployment reuse (e.g. d above) which is usually entirely automated. But I'm suspicious of large-scale reuse during the development process. We've been trying to make it work for decades now, and it just hasn't panned out. At what point do we reframe a goal as a delusion? I think we're way past it in this case.


> But I'm suspicious of large-scale reuse during the development process. We've been trying to make it work for decades now, and it just hasn't panned out. At what point do we reframe a goal as a delusion? I think we're way past it in this case.

Software reuse is not a "goal" but a common, well-understood practice. Call a function from more than one location; instantiate a class more than once and call its methods from several locations. We do that because it is helpful for development and maintenance, not because that would be our "goal".

I would think most software written, say, in Java uses that approach in practice: create multiple instances of a given class. So what would be the evidence that this approach has not "panned out" and is a "delusional goal"?

Maybe your "avoid reuse" is a better approach. But I don't think there's a lot of evidence that would prove it to be so, not at least yet.


I'd argue there isn't a lot of evidence either way. What you call "common well-understood practice" I strongly suspect is just cargo culting. As evidence: it doesn't seem to make any projects turn out better. All projects eventually suck over time, no matter how well they follow "best practices".

If you're copy-pasting code everywhere rather than creating a function, that's obviously terrible. But on the other hand, we've all also seen functions with twelve arguments and 3 boolean flags that are trying to do too many things at once. Is there any "practice" that gives us good guidelines on when to fork a function into two? I haven't heard of any. The fact that our "best practices" are one-sided is a strong sign that they're more religion than science. Science reasons about trade-offs.

Software isn't yet much of an evidence-based science. There are just too many variables. We don't know how to reason about the interactions between them.

Here are some writings on the dangers of over-abstraction. At the very least they're evidence that I'm not alone in these suspicions.

http://www.sandimetz.com/blog/2016/1/20/the-wrong-abstractio...

http://programmingisterrible.com/post/139222674273/write-cod...

http://bravenewgeek.com/abstraction-considered-harmful

http://dimitri-on-software-development.blogspot.de/2009/12/d...


> All projects eventually suck over time, no matter how well they follow "best practices".

Yes, they (often) "suck", but many of them work as well and produce widely used software.

I would not call subclassing a "best practice" but a widely used form of reuse. It is a useful feature to have in your toolbox, in my experience, having programmed in Java and other OOP languages.

I think it boils down to "There is no Silver Bullet". Reuse is not a Silver Bullet. But it is "a" bullet. :-)


If we've gone from thinking of it as essential to just "a" bullet, I'm happy. Now we can have a productive conversation about what other bullets this bullet displaces in your toolbox, and whether that's worth it.

I'm not claiming functions are bad, or classes are bad, or inheritance is bad (there is evidence of the last, but it's orthogonal to this discussion). I'm claiming freezing interfaces for compatibility guarantees is often a bad idea. There needs to be far broader awareness of what you give up when you freeze an interface and commit to support it indefinitely.

It's about minimizing zones of ownership within a single computer. Right now our computers have thousands of zones of ownership, and it's easy for important use cases and bugs to fall through the cracks between zones of ownership. Freezing an interface fractures what used to be a single zone of ownership into two.


I believe that this goal can be better achieved by exposing data rather than procedures. Data are much more stable, and although the design depends on what you originally plan to do with them, this dependency is not rigid and there are usually many different usage scenarios. Also, data are much more comprehensible and malleable than procedures, so quite a few compatibility issues can be solved automatically.

E.g. the primary goal of DTD was to validate XML against a simple grammar, but the same DTD can be used for auto-completion when editing the XML. Same data, different usage. Also, if we want to shift to a more powerful validation engine, e.g. XSD schema, we can parse DTDs and mechanically convert them into equivalent XSDs.
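
A tiny sketch of that "same data, different usage" point (in Rust, with an invented schema rather than a real DTD): one piece of data drives both validation and auto-completion.

    // The "schema" as plain data.
    const ALLOWED_FIELDS: &[&str] = &["title", "author", "year"];

    // Usage 1: validation.
    fn is_valid(field: &str) -> bool {
        ALLOWED_FIELDS.contains(&field)
    }

    // Usage 2: auto-completion, driven by the same data.
    fn complete(prefix: &str) -> Vec<&'static str> {
        ALLOWED_FIELDS.iter().copied().filter(|f| f.starts_with(prefix)).collect()
    }

    fn main() {
        assert!(is_valid("title"));
        assert!(!is_valid("publisher"));
        println!("{:?}", complete("a")); // ["author"]
    }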


The paper name-drops Ivan Illich and "Tools for Conviviality". If you are intrigued by his ideas, you might be interested in the blog by L. M. Sacasas, The Convivial Society. His writing holds plenty of insight on the intersection of technology and society.

https://theconvivialsociety.substack.com/


I love the effort to make the stack fully transparent and learnable. This seems so important! I wish more people had the ambition to re-invent the basics of our computing platform to see how things could be different.

Now my question is: does this learnability hold as the code grows? For example, would the transparency enabled by the stack be reflected in the codebase for a word processor built on this stack? I'd expect not. For one, I'm thinking about how the original program in my Marilyn Maloney demo (http://glench.com/MarilynMaloney/) had good primitives, grammar, and a visual representation, yet the code is still hard to understand and can be made much easier to understand by depicting behavior (the interactive UI I made).

Secondly, I'm thinking of the Alan Kay-ism "architecture dominates materials" i.e. the invention of the arch was much more effective than the invention of the brick. Mu seems like it could be a great, transparent material, but more important to how software is understood and modified is the architecture of how the material is organized (and personally I think this has a lot to do with UI design and human communication more than the technical foundation).


These are good questions! It may well be that there's an architectural breakthrough out there that will obsolete this whole approach. It's one of very many ways it can fail. Think of it as a hedge, at a societal level. What if we aren't able to come up with a technological breakthrough? After all, software has lagged Moore's law for decades now. I think there's a very real danger in the next few decades that software will get regulated like other fields before it, killing the intrinsically motivated aspect of it entirely. That would be a huge loss to society.

We'll have to see how higher levels of the stack develop. I started on this quest building a Lisp, so believe me when I say I can't wait to get to the high-level side of things. There doesn't seem any reason to imagine that we can't have good primitives, grammar and visual representation. My claim is merely that in addition to these interface properties the implementation for them is also important. And it gets short shrift in the conventional narrative.

That said, let's imagine that there's a strong case that we need to give up parser generators (or something similarly high-level) in order to give everybody the ability to understand their computers. I'd take that trade in a heartbeat.


I'd like to read this, but it's triggering my BS radar by the amount of gobbledygook and sentences like "Creating an entire new stack may seem like tilting at windmills, but the mainstream Software-Industrial Complex suffers from obvious defects even in the eyes of those who don’t share our philosophy."

I wrote it for an academic conference (https://2020.programming-conference.org/home/salon-2020) which had as its theme an essay by Ivan Illich (http://akkartik.name/illich.pdf, pdf, 67 pages)

But in the end, your BS radar is your own.


That's fair. But academia is often full of BS, pardon the language, and so the radar needs to be cranked up. With, say, Medium articles, one can tell within seconds of skimming whether it's good or not. With academic papers it can take hours, if not weeks, to discover whether it's complete BS or whether I'm just not smart enough, so I can only go off minor cues.

It's generally better to read hard things. They're hard because they have a lot of information density or a lot of gaps. But that also makes people mistake poor readability for quality. Something that has no content tries to be hard to read by stuffing itself full of gobbledygook, whereas something that is trying to communicate an idea often goes through a few revisions trying to simplify it. If you have great ideas, you want the reader to be expending their brainpower comprehending the idea, not translating it into simple English. (The exception is jargon that's common to people at the right level.)


> Academic papers can take hours, if not weeks to discover if it's complete BS

Did you see this recent thread? It is a paper on how to read papers and it was very useful and insightful for me.

https://news.ycombinator.com/item?id=21979350

From the linked pdf:

"Researchers must read papers for several reasons: to review them for a conference or a class, to keep current in their field, or for a literature survey of a new field. A typical researcher will likely spend hundreds of hours every year reading papers.

Learning to efficiently read a paper is a critical but rarely taught skill. Beginning graduate students, therefore, must learn on their own using trial and error. Students waste much effort in the process and are frequently driven to frustration.

For many years I have used a simple ‘three-pass’ approach to prevent me from drowning in the details of a paper before getting a bird’s-eye-view. It allows me to estimate the amount of time required to review a set of papers. Moreover, I can adjust the depth of paper evaluation depending on my needs and how much time I have. This paper describes the approach and its use in doing a literature survey."


Software does suffer from obvious defects. Though you may not get total agreement on what those obvious defects are, you would still get substantial agreement.

One of the major effects in software right now is that we pour such a staggering amount of effort into our current forms and patterns that it can create an overwhelming barrier to "better ideas", because there's just no way that even a much better idea can come out of the gate and compete against the behemoth of all the work done with the current paradigms. So we're stuck with patterns written into our systems from 40-50 years ago that can only very slowly change.

As a more concrete example: my criticism of things like "visual programming" as "the brand new paradigm that is going to revolutionize programming because nobody has thought of it before" has gotten to the point where I can virtually just copy & paste it to almost anyone who tries one (in the spirit of [1]). But the flip side of my polished criticism is that comparing software platforms with literally billions of person-hours poured into them against some guy's hobby project isn't necessarily a fair representation of what the project could be if it also had billions of hours poured into it (especially since the hobby project is, by necessity, mostly built on the platform that embodies the wrong paradigm, which tends to force itself onto the new project in various ways if it wants to come out of the gate able to deal with real problems like "parsing XML" or whathaveyou). Unfortunately, as is the way of these things, if it's going to "revolutionize programming" it's got to actually beat the current systems in some useful manner, fairly conclusively, right out of the gate.

As another example, no points for guessing that in a hundred years, computer CPUs aren't going to look much like a modern CPU. Perhaps there won't even be a "Central" processing unit anymore. But I promise you the current architecture will probably hang on longer than you'd guess (I wanted to say "50 years" and honestly got a bit gunshy about that... for all their orders of magnitude more transistors modern CPUs are clearly related to what we had 50 years ago today!). It's going to be hard for that new paradigm to compete with something with so much effort poured into it.

[1]: https://trog.qgl.org/20081217/the-why-your-anti-spam-idea-wo...


It's ironic that the presentation is not very see-through.

I really wish such papers had double spacing, or better still, that Adobe Reader let you adjust the spacing - that would be an amazing feature, as some layouts are easier to read with more spacing between the lines - at least for me.

My bicycle has been Ruby on Rails for over 10 years and I think it is hard to beat. Most of the underlying framework is easily inspectable and changeable to suit your needs, while still providing sharp knives that one could use to cut oneself if not careful.

But I think this push for "stimulating curiosity" is academic wank. That's not what bicycles for the mind are for - they're tools for building, not navel-gazing. It sounds like Mu was built more for ego or prestige than to actually do anything meaningful.

Show me Mu as a bike I want to ride - not as a philosophical introspection on why Mu is in some way superior. Reading this PDF, it's clearly more ideal than practical. And while it may be "see-through", its shape, interfaces, interactions and surface area are convoluted and poorly presented.

Maybe I am wrong to expect the humanities out of this, but humans are the ones who have to use this stuff/ride these bikes.



