Hacker News
Why are Black Boxes so Hard to Reuse? (OOPSLA ’94) (parc.com)
82 points by swah on May 15, 2011 | 35 comments


Attempted Summary:

Black boxes are hard to reuse because hidden implementation details still impact the client, e.g. GUIs don't scale to thousands of windows, and LRU caching is sometimes not optimal.

The proposed solution is "meta-interfaces" that let the client control how the interface maps to an implementation. For example, a cache could represent pages with objects that participate in the swapping policy; the client could subclass the page class to provide a custom policy. It's important to keep the base and meta interfaces separate.
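A minimal sketch of the idea in protocol terms rather than the paper's subclassing - every name below is my own invention, not the paper's:

  ;; Base interface: what ordinary clients program against.
  (defprotocol Cache
    (lookup [cache k])
    (store  [cache k v]))

  ;; Meta-interface: lets the client control how the cache maps to
  ;; an implementation by supplying its own paging policy.
  (defprotocol PagePolicy
    (note-access [policy k])    ; called on each hit
    (pick-victim [policy ks]))  ; which key to evict when full

  ;; A client whose workload defeats LRU plugs in its own policy
  ;; instead of prying open the box:
  (defn random-policy []
    (reify PagePolicy
      (note-access [_ _] nil)
      (pick-victim [_ ks] (rand-nth (vec ks)))))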

Some other more exotic forms of meta-interface are mentioned involving reflection and dynamic code generation, but no real details are given. It's also acknowledged that the OO approach often results in fragmentation i.e. too many objects.

My thoughts:

The meta-interface concept is pervasive today, in the form of composable object models and dependency injection. But these models are still hard to design and suffer from the fragmentation problem.


I take it you're not familiar with Kiczales's work, The Art of the Metaobject Protocol, nor his work on Aspect-Oriented Programming. The meta-interface concepts that are "pervasive" today exist in a crippled form compared to what is being described here.


Why would you assume that? I know of those things and they haven't penetrated much in the 17 years since that presentation. But the patterns they embody have at least become common in the more popular and impoverished languages. This reveals their value to the masses and drives their acceptance as first-class language features.


My impression of why these things haven't penetrated in 17 years -

  * Few people program in Common Lisp
  * AspectJ is an extension to Java


I think Aspects have penetrated pretty well via Spring. They're probably used more for logging than anything else, but it's something.


The problem with a MOP is that it is so general that it is possible to do anything and thus impossible to reason about anything. Kiczales gave an interesting retrospective talk where he talks about this subject and more:

http://bc.tech.coop/blog/060709.html


Kiczales actually points out that many of the recent advances in reasoning about programs (type systems, contracts) can probably be applied here.


Killer read. Reinforces my personal belief that the way out of the dark hole of software complexity is programming languages that:

  * Emphasize protocols, de-emphasize hierarchies
  * Support compile-time meta-programming (toy sketch after this list)
  * Support flexible verification (optional type checker /
    contracts / predicate dispatch)

The talk also covers the crux of why OO inheritance is the pits: you inherit the broken implementation. Working against protocols, which may seem initially inconvenient, actually allows you to avoid a future tarpit.
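On the second point: Clojure macros run at compile time and emit ordinary code, so the abstraction costs nothing at runtime. A toy sketch (`unless` is my own example, not clojure.core):

  ;; Expands at compile time into a plain `if`:
  (defmacro unless [test & body]
    `(if ~test nil (do ~@body)))

  (unless false (println "this runs"))
  ;; expands to (if false nil (do (println "this runs")))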


Yup. I wonder how much this talk influenced NeXT's NeXTSTEP design? Certainly a lot of the things he was thinking about have been solved in a fairly elegant manner - late binding, delegates that implement mappings, reflection, using protocols instead of class hierarchies where appropriate...

Actually, when you get down to it, MVC is all about making client-specific mappings for the loading of data into views. The talk is not exactly startling seen from a 2011 perspective, but it's nice to see the reasoning behind so many of today's patterns so clearly expounded.


I have been thinking of such a system for 20 years and I've finally decided to build it. I need brains bigger than mine, for conversation, collaboration, whatever works for you. My email is jamie via binaryfinery.com if you would be interested.


By #1, do you mean something like the message passing in Erlang? It seems like a reasonable fit for 1 and 3, though its macro support is pretty weak.


Doesn't Common Lisp already give you most, if not all, of those things?


Not really. Common Lisp's basic data structures are black boxes. Clojure's headed the right direction by building even the most fundamental bits (like lists) on top of protocols.
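In miniature (a toy protocol of my own, not Clojure's actual internals):

  ;; A fundamental operation defined against a protocol, so any
  ;; type - including ones you don't own - can opt in:
  (defprotocol MySeqable
    (my-first [coll])
    (my-rest  [coll]))

  (extend-protocol MySeqable
    clojure.lang.PersistentVector
    (my-first [v] (nth v 0))
    (my-rest  [v] (subvec v 1)))

  (my-first [1 2 3])  ;; => 1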


That doesn't exist yet, right?


I must admit that I don't find components as black boxes at all hard to (re)use. I find frameworks hard to (re)use.

On virtually a daily basis I drop in a new component where I don't know the internals. I do it without thinking twice, for the most part. Half the time I don't even look at the docs -- just the interface.

Frameworks, OTOH, are a pain to learn and integrate. And once you're bought in, you're kind of locked in.


That talk knocked my socks off and pretty much drove me to my own bastardized version of "Naked Objects"

http://en.wikipedia.org/wiki/Naked_objects

[Edit: upon re-reading it: in P006 he says black boxes are hard to reuse because they hide performance characteristics? I can think of a better reason, actually]


I really do not see how black box abstraction is the problem (in the functional sense). Really I see this more as a problem of composability of software. This is what attracted me to, say, Haskell in the first place over mainstream object-oriented languages. Sure, a module might expose a top-level function that makes a lot of assumptions about implementation behavior. But if you don't like how that behavior is composed, you can always write a new function that composes the behavior in a different way.
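The point is language-agnostic; in Clojure clothing it might look like this (a toy module, all names mine):

  (require '[clojure.string :as str])

  ;; A module that exposes the pieces, not just the composite:
  (defn tokenize  [s]  (str/split s #"\s+"))
  (defn normalize [w]  (str/lower-case w))
  (defn index     [ws] (frequencies ws))

  ;; The module's own composition...
  (def word-counts (comp index #(map normalize %) tokenize))

  ;; ...and a client's recomposition, no patching required:
  (def case-sensitive-counts (comp index tokenize))

  (word-counts "To be or not to BE")
  ;; => {"to" 2, "be" 2, "or" 1, "not" 1}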

All this meta-interface stuff seems like it is way more complicated than it needs to be... Please show me what I am missing.


This is how I think about it.

Imagine you have a black box that calculates an interest rate futures convexity adjustment. That is, for an OTC forward you have one fair value, and for a future you have a different fair value because the future has margins, and the function calculates the difference between the two.

Now you have some useful code that works out margin payments for various trajectories of prices for the future/forward, based on some model for price movements and another model for interest payments/receipts on the margin calls.

Now you have a simple change where some exchange changes the way they do margin calls.

One way to resolve this is that the convexity adjustment function takes in a margining function as a parameter (sketch below). But if it was not known from the start that this was needed, maybe another part of the model was parameterized rather than this.
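All names below are made up; this is just a sketch of the parameterized shape:

  ;; The margining scheme becomes an explicit argument instead of
  ;; an assumption baked into the box. `margin-call` maps one day's
  ;; price move to a cash flow.
  (defn margin-cash-flows
    [margin-call price-path]
    (map margin-call (map - (rest price-path) price-path)))

  ;; Simple daily variation margin - one scheme among many:
  (def daily-margin identity)

  (margin-cash-flows daily-margin [100.0 101.5 99.0])
  ;; => (1.5 -2.5)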

In practice, in this case, assumptions about the margining are baked into the mathematics of the black box; handling arbitrary margining systems is not done, so you write a whole new function.


I still see that as a composition problem and not one of abstraction.


Functions are not enough, since functions are implementations. Haskell type classes, on the other hand, represent the kind of thinking being discussed here.


Totally agreed. I did not mean to imply all you need is functions. Just that Haskell allows for much more flexible and correct composition of software (especially when dealing with side effects), solving many of the issues in the OP without reflection or meta-interface complexity. Not to say Haskell is perfect, just that I see this as a composition problem, not a failure of black box abstraction.


It's interesting to look at the whole Visual Basic / VBX market and how those apps were assembled. I am not a big fan, but I wonder if there are some lessons that have been forgotten from that era.


I'm reminded of Clojure's ability to rebind global functions with dynamic scope on a thread-local basis, and Ruby's ability to monkey-patch.
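A minimal Clojure sketch (the var and its contents are made up):

  ;; A dynamic var holding a function...
  (def ^:dynamic *fetch* (fn [url] (str "real GET " url)))

  ;; ...rebound with thread-local dynamic scope; other threads
  ;; still see the root binding:
  (binding [*fetch* (fn [url] "canned response")]
    (*fetch* "http://example.com"))
  ;; => "canned response"

  (*fetch* "http://example.com")
  ;; => "real GET http://example.com"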


The SBCL Common Lisp implementation (at least, I'm sure it's not the only one) can also do thread-local dynamic binding.


What would be interesting would be if some language offered the ability to do thread-local (or thread-heritable) dynamically scoped monkey patching, and closures over monkey patches. So you could make a function that rebinds the implementation of lists and can return a list that is, for example, backed by an mmapped file.
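Clojure gets partway there with bound-fn, which closes over the thread's current bindings. An mmap-backed list is beyond a snippet, so reverse stands in for the swapped implementation:

  (def ^:dynamic *make-list* vec)

  ;; bound-fn captures the bindings in effect when it is created
  ;; and reinstalls them on every call - a closure over a patch:
  (def patched
    (binding [*make-list* reverse]
      (bound-fn [xs] (*make-list* xs))))

  (patched [1 2 3])
  ;; => (3 2 1), even though the binding is long out of scope here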


Also: what makes Clojure's dynamic binding stand out is that you can rebind pretty much anything, not just special variables - if it's implemented as a "var", you can rebind it. That goes for both data and functions.
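Same mechanism for data (toy numbers):

  (def ^:dynamic *page-size* 4096)

  (binding [*page-size* 8192]
    *page-size*)
  ;; => 8192; back to 4096 outside the binding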


I've been designing such a system in my mind for 20 years and not telling anyone in case they steal the idea. What a waste of time! I've finally decided to make this happen and I need interested parties to be involved at any level: conversation, collaboration, coding, ridicule. This is jumping the gun a bit, but I really wanted to contact everyone who has posted here because you are all, clearly, thinking at an advanced level about this stuff. I'm jamie, and my domain is binaryfinery.com. Please write if you'd be willing to converse more.


Fascinating presentation. I have a couple of questions, essentially about the history of this: 1) was Taligent's ignominious end in some way responsible for stalling the development of software systems like this, and 2) did C's more pragmatic approach lead everybody in a direction that is going to be exceedingly difficult to change, with one possible outcome being that we have to stay the course until we eventually figure out how to write easily composable black box systems?


I think the problem is actually deeper than the observation here. The core problem is that software engineering is still very immature. We simply lack a common technical vocabulary of robust, well-worn models and generalized systems and their most salient attributes. That lack of being able to describe and understand the operation of components without disclosing the mechanism of their internal operation leads to "black boxes" being blacker than they should be. And that in turn leads to people using components in ways that aren't applicable, because the abstraction was too crude.

It doesn't require knowing how a fluorescent lamp works to know that it's not well suited to, say, high-intensity searchlights or strobe lights. Software uses can span many orders of magnitude, but often the scalability of components is not even part of the vocabulary. Often the naive conceptualization is that a given component is usable in every possible context. This idea is patently ridiculous, but we lack the terminology to move far beyond it. Imagine if people thought that merely because some company made A car, you could use that car for everything automobile-related: commuting, cross-country rally races, lunar roving, towing tractor trailers, etc. That's the level of sophistication of abstraction models we are at today.


The observation in the OP is quite deep, and certainly addresses all of the points you are making. And it's not a problem of terminology, it's a problem of design. What Kiczales is talking about is the power of descriptions such as this:

  (definterface Light
    (turnOn [])
    (turnOff [])
    (powerRequirement []))

There is no black box here, as there is no implementation. We haven't committed to any specific misconception an implementation might represent, and we've left the door open for massive scale.


And that is what EWD was warning us about decades ago: http://www.cs.utexas.edu/~EWD/transcriptions/EWD03xx/EWD340....


Interesting how I was missing a term like "mapping conflict", despite facing the situation frequently.


[deleted]


I don't see how those are related at all. Care to clarify?


If you mean my domain name, thats just what pays the bills and where you can contact me. Thanks for checking it out though.


Black boxes are hard to re-use because, by definition, you don't know what's inside of them.



