
Brain Oriented Programming - pbw
https://tobeva.com/articles/brain-oriented-programming/
======
legerdemain
This is a nice story, but the reality is that popular languages and libraries
tend to expose API surfaces that are broad and shallow, not ones that are
narrow and deep.

For example, a Python string object has something like 40 methods. I counted
about 85 instance methods on org.joda.time.DateTime from Java.[1] Turning to
JavaScript, every method in Underscore.js lives in a single gigantic
namespace.[2]

Experience suggests two related factors that favor the proliferation of big,
shallow APIs.

Modern IDEs offer rich auto-completion with pop-up docs. This makes it much
easier to look at the list of available fields and methods and choose the one
that's appropriate.

A second, related reason is that IDEs generally _don't_ have tooling that is
nearly as rich to help you identify the chain of calls you need to make to
obtain an instance of some type that has the method you want. Do you get a
Connection from a ConnectionContext, a ConnectionContextManager, or a
ConnectionContextManagerCache? And how do you get one of those?

Shallow APIs and big namespaces tend to favor easy lookup. Deep APIs where
every type only has a handful of methods make you memorize them.
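To make that contrast concrete, here's a sketch in Python; every name in it
(Connection, ConnectionContextManager, open_connection, ...) is hypothetical,
invented purely for illustration:

```python
# Hypothetical types illustrating a "narrow and deep" API: each class
# exposes only one or two methods, but the caller must memorize the
# chain of calls that yields a Connection.

class Connection:
    def send(self, data: bytes) -> int:
        return len(data)  # stub: pretend we sent the bytes

class ConnectionContext:
    def get_connection(self) -> Connection:
        return Connection()

class ConnectionContextManager:
    def get_context(self) -> ConnectionContext:
        return ConnectionContext()

# Deep: three hops to get what you want, none of them discoverable by
# autocompleting on the object you already have in hand.
sent = ConnectionContextManager().get_context().get_connection().send(b"hello")

# Broad and shallow alternative: one function in a flat namespace,
# found instantly by scanning an autocomplete list.
def open_connection() -> Connection:
    return ConnectionContextManager().get_context().get_connection()

sent_flat = open_connection().send(b"hello")
```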

[1] [http://joda-time.sourceforge.net/apidocs/org/joda/time/DateTime.html](http://joda-time.sourceforge.net/apidocs/org/joda/time/DateTime.html)

[2] [https://underscorejs.org/](https://underscorejs.org/)

~~~
waheoo
Shallow or flattened structures are always what I default to.

The example given does look better, but it's far more complicated to actually
use.

Focus on flat, denormalized structures as your default; it's simpler and more
discoverable.

At some point it does get unwieldy, and at that point it's worth finding a
seam and refactoring out a chunk of logic.

The difference is that you're using the code's modularity needs to guide what
gets defined where, not some arbitrary idea of what looks neat and organised,
because what looks neat and organised is usually not very flexible.

~~~
TheOtherHobbes
I suspect OP is confusing number of attributes with number of abstraction
layers.

Attributes are not a problem as long as they're listed with intent and not
just thrown into a bucket, and there's some rationale for their existence.

It's much harder to trace flow down and up through multiple abstraction and
encapsulation layers. And it's harder still to remember the layers for
different classes/applications. And it's _even harder_ to do this when all the
class names are related and hard to tell apart.

~~~
pbw
Hierarchy and abstractions have costs, no doubt. I didn't get into that in the
article, but the article was only 1500 words. I can imagine a long "yes,
but..." section that explores the risks and downsides of small objects. It's
possible to write total crap software using only small objects for sure. I
don't advocate doing that. That's where the Bonsai metaphor comes in, that's
as much about shaping and pruning as growing. Uncontrolled growth of small
objects has a name: cancer.

------
wruza

      (start_ns, end_ns) -> Span
      (process_id, thread_id) -> Origin
    

Nope x10. Before that transformation I got what the structure does instantly,
but after it I have no idea what exactly a span and an origin are. You cannot
make things simpler by adding nomenclature. Remember that "show me your code"
vs "show me your data" thing? This hides your data, and now readers have to go
to definition and remember +2 references to get the whole picture. It may look
nice on a type diagram, where that's obvious, but not in code.

But that shouldn't defeat the article's point. I think the same rule can be
applied to _groups_ of related attributes instead. Having 5-7 groups per
structure is okay, but when the groups exceed that limit, that's a signal to
rethink your god object.

Group things:

    
    
      self.name = name
      self.category = category
      self.args = kwargs
      self.phase = phase
    
      self.start_ns = start_ns
      self.end_ns = end_ns
      
      self.process_id = process_id
      self.thread_id = thread_id
    

And they'll be much easier to grasp. Your brain will do that span/origin
grouping in the background, in terms it definitely knows but may not even
have a verbal label for. It may not be able to remember 10 _different_
things, but it will remember that thread_id is coupled with process_id,
because you already have that knowledge in long-term memory.
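Fleshing the grouped layout out into a runnable class (the PerfEvent name and
its attributes are borrowed from the article's example, so treat them as
assumptions):

```python
# Grouping related attributes with blank lines, as suggested above,
# without introducing any new types. Names follow the article's
# PerfEvent example.

class PerfEvent:
    def __init__(self, name, category, phase,
                 start_ns, end_ns, process_id, thread_id, **kwargs):
        # what the event is
        self.name = name
        self.category = category
        self.args = kwargs
        self.phase = phase

        # when it happened (the would-be "Span")
        self.start_ns = start_ns
        self.end_ns = end_ns

        # where it came from (the would-be "Origin")
        self.process_id = process_id
        self.thread_id = thread_id

event = PerfEvent("parse", "io", "B", start_ns=100, end_ns=250,
                  process_id=7, thread_id=12)
```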

~~~
pbw
I love that. Chunking by textual grouping is great. Sometimes that is totally
enough. One really nice thing about sub-objects, though, is that you can pass
them to and from functions:

    
    
        info = get_owner_info(event.owner)
        info.record_duration(event.span)
    

I usually like that better than:

    
    
        info = get_owner_info(event.process_id, event.thread_id)
        info.record_duration(event.start_ns, event.end_ns)
    

If several attributes are passed around together through several functions,
I'm much more likely to consider making a sub-object. It's 100% true that sub-
objects and hierarchy have costs of their own. Nothing is free. If the sub-
objects are not pulling their conceptual weight they should probably be
removed. I used the Bonsai metaphor because those trees require constant
trimming and shaping. There's a name for unchecked growth, for budding
sub-objects willy-nilly: cancer.

The bigger thing though is pushing functionality down into the sub-objects.
For example if my server has these three attributes:

    
    
        self.connection_address
        self.connection_start_time
        self.connection_status
    

I'd be _highly_ likely to create a Connection sub-object because I'd feel
very confident that I'd want it to sprout methods like:

    
    
        self.connection.is_alive()
        self.connection.drop()
        self.connection.get_duration()
        self.connection.stats.get_total_bytes()
        self.connection.stats.get_mbits()
    

My article only talks about objects and attributes and doesn't say too much
about methods. I think having lots of methods on an object is often fine.
Numpy's ndarray object has around 15 attributes but over 50 methods. I don't
have a problem with that, especially for such a key object.

There's a big difference between attributes and methods. Adding an attribute
to an object increases the complexity of every other method in the object,
and of every future method yet to be added. As the article says, an attribute
is just a global variable to the object, and adding even a few globals can
tip the scales to where the object is a confusing mess. But adding a method
is fairly harmless; it doesn't really make the other methods more complex.
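A minimal runnable sketch of the sub-object direction; Span and Origin come
from the article's mapping, while OwnerInfo, get_owner_info, and
record_duration are the hypothetical helpers from the snippets above, given
stub bodies here so the example executes:

```python
# Span and Origin as small value types; OwnerInfo and get_owner_info
# are hypothetical helpers with minimal bodies.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Span:
    start_ns: int
    end_ns: int

    def duration_ns(self) -> int:
        return self.end_ns - self.start_ns

@dataclass(frozen=True)
class Origin:
    process_id: int
    thread_id: int

@dataclass
class OwnerInfo:
    durations: list = field(default_factory=list)

    def record_duration(self, span: Span) -> None:
        self.durations.append(span.duration_ns())

_owners: dict = {}

def get_owner_info(origin: Origin) -> OwnerInfo:
    # One OwnerInfo per (process, thread). A frozen dataclass is
    # hashable, so Origin works directly as a dict key -- one small
    # benefit of bundling the pair into a sub-object.
    return _owners.setdefault(origin, OwnerInfo())

info = get_owner_info(Origin(process_id=7, thread_id=12))
info.record_duration(Span(start_ns=100, end_ns=250))
```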

~~~
thelazydogsback
>Chunking by textual grouping is great. Sometimes that is totally enough

Agreed, but I much prefer object-first vs. verb-first:

owner_info_get(...)

Having 600 methods that start w/"get" and 450 that start with "set" doesn't
help docs or autocomplete. I remember the Win32 API books were organized
alphabetically and it would have been comical if it weren't so useless.
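A tiny illustration of why noun-first names group better in any alphabetized
list (docs, autocomplete); all the names below are made up:

```python
# Alphabetical sorting clusters verb-first names by verb, scattering
# related operations; noun-first names cluster by the thing operated on.
verb_first = sorted(["get_owner_info", "set_owner_info",
                     "get_span", "set_span"])
noun_first = sorted(["owner_info_get", "owner_info_set",
                     "span_get", "span_set"])

# verb_first interleaves owner_info and span operations under "get_*"
# and "set_*"; noun_first keeps each object's operations adjacent.
```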

------
throwaway13337
'args' being one of the arguments rings alarm bells beyond anything written in
the article.

Python libraries often make it unclear which arguments are possible. You have
to hope it's in the docs or go digging around in the code.

It's extremely common and a horrible part of the python community.

Why is this standard?

~~~
pbw
I totally agree, but in this case the args are precisely the key/value pairs
that appear in the chrome://tracing GUI under the heading “Args”! So it
actually works pretty nicely. But overuse and misuse of args and kwargs is a
huge problem in Python, I agree.

------
bedobi
Uh... Poorly composed bags of stateful attributes that are globally shared all
over the place don't become OK or easy to understand or work with just because
each of them has seven such attributes or fewer.

~~~
pbw
The Bonsai metaphor in the article agrees with your point, you don’t want
poorly composed bags shared all over the place. You want a carefully and
continuously groomed tree of objects that’s well designed and well cared for.

------
userbinator
_The biggest trap of software development is that it’s easy, trivial in fact,
to write software that you yourself cannot understand, and in turn no one else
can understand._

An interesting counterpoint:
[https://www.linusakesson.net/programming/kernighans-lever/index.php](https://www.linusakesson.net/programming/kernighans-lever/index.php)

I've never had problems with too many fields or even global variables. A
recent "weekend project" I did of medium complexity (video decoder) has around
50 globals and zero actual "objects". That might be a bit atypical since it's
mostly a set of nested loops and not as "branchy" as usual "business code";
but on the contrary, I've had problems with the ultra-deep callstacks and
massive indirection that this sort of "micro-structuring" tends to produce.
The micro-complexity goes down, but the macro-complexity goes up and it
becomes harder to see the state of the whole system at once, which is
extremely important for debugging. Following data as it flows between multiple
objects and jumps around between functions is much more difficult than
scrolling slowly through a long section of mostly straight-line code.

I wonder if _being able to see more of your program at once_, as APL
programmers often espouse, is really the key to effective programming. You may
only be able to hold "between 5 and 9 things in your brain at any one time",
but to be able to see the other 40+ with a single glance at the code seems
much more important to me. To use my example of a video decoder above, the
existing implementations I could find were all multiple files and multiple
functions in each file, and to study how they worked was rather tedious. When
I wrote my version in a single file with very few functions and objects, it
seemed much simpler, and my sense of understanding increased. It was a similar
feeling of epiphany as when I first saw
[https://news.ycombinator.com/item?id=8558822](https://news.ycombinator.com/item?id=8558822)

~~~
TeMPOraL
Agreed. When I program on larger codebases these days, this is my number one
desire: I wish I could see the structure of the _entire_ program at a glance,
and seamlessly navigate it. Unfortunately, there are no tools for that (that I
know of). In the absence of such tooling, each extra layer of indirection
makes the program fractally harder to understand and follow.

The same applies to runtime. For now, the best trick I have is building the
codebase, then launching the program in a debugger, setting breakpoints around
areas of interest, and then stepping it, exploring the call stack and the flow
of data. But debugger UIs aren't exactly friendly when you're exploring, and
not actually debugging.

(Sometimes, but very rarely, I can get away with running a tracer on a
program. E.g. for Common Lisp, I wrote myself a small tracing profiler[0], and
half the time I use it to explore the control flow rather than chasing
performance issues. Fortunately, "time travelling debuggers" are getting more
popular as a concept, so maybe good tooling will become more available in the
future.)

--

[0] - [https://github.com/TeMPOraL/tracer](https://github.com/TeMPOraL/tracer)

------
TeMPOraL
It would be great if it were that simple. The advice from the article ends up
punting the complexity into a hierarchy of objects/types. Which would be fine
if you never needed to be in several places of that hierarchy at once. Except
in practice, you almost always do.

I've worked on a large C++ codebase that went with the "lots of small objects,
strong typing everywhere" approach. Classes rarely had more than 5 fields or 5
methods. Data was aggressively transformed and narrowed so that, within any
given function, you had in your hands a bunch of objects that contained only
the things you needed.

The experience was me slowly going insane from the sheer amount of jumping
around files and the need to keep in my head 30 different strongly-typed
aliases to std::string.

I find myself leaning towards a belief that there is such a thing as
excessive subdivision, excessive separation. If you treat 7 +/- 2 as the size
of your brain's L1 cache, then _what you see in front of you_ is an L2 cache.
Constantly swapping things between L1 and L2 is expensive, but nowhere near as
expensive as between L1 and RAM (all the other files in your project you
aren't looking at this very second).

--

Not to mention, objects don't communicate lifetime hierarchy well. If you
split your Foo class into Foo, Bar, Baz and Quux, just to keep the size of the
interface below 7, then when I see a Bar and Quux appearing somewhere in the
code, it's not obvious to me that these are part of the same thing and always
used together. Perhaps in a perfect world I shouldn't care, but most of the
time I really do. This is probably a more general problem of OOP -
structuring your program into a large graph of small objects, under
conditions where you have near-zero visibility into how the entire graph
looks, and you're forced to deal with only small pieces of it.

That picture in the article labeled "Complexity grows without bound"? That's
exactly how OOP feels to me when you follow the proper OOP practices.

--

Edit: 'TheOtherHobbes expressed it perfectly here:
[https://news.ycombinator.com/item?id=24167120](https://news.ycombinator.com/item?id=24167120).
Attributes are not the problem. But jumping up and down the ladder of
abstraction, breaking in and out of encapsulation layers - that's what's
mentally taxing.

~~~
esperent
> If you treat 7 +/- 2 as the size of your brain's L1 cache, then what you
> see in front of you is an L2 cache. Constantly swapping things between L1
> and L2 is expensive, but nowhere near as expensive as between L1 and RAM
> (all the other files in your project you aren't looking at this very
> second).

As soon as a file grows beyond a hundred lines or so, most of it is
off-screen, so it's not that different from putting it into another file
(it's no longer in the L2 cache, to use your analogy).

It's a choice between jumping around in a single big file, or jumping around
between several smaller files organized in a folder tree structure. I have the
folder tree open to the side of the editor while I work on the current
document so that's also in my L2 vision cache.

Personally I prefer that to one big file that I have to keep scrolling/jumping
around in. Of course, for a large file I can open an outline view instead to
help me jump around in the file. But I prefer files that are short enough so I
don't need that.

~~~
TeMPOraL
A file growing too big is indeed a problem, but it's partially mitigated by
the fact that things in a single file are all directly related to each other
(if they aren't, it's refactoring time). An outline view or even code folding
also helps. On top of that, I can scroll within a single file quickly with a
single keystroke.

Still, if you can keep a tree of all your source files entirely visible on
screen, then your codebase is rather small. The kind of codebases I'm talking
about, the tree doesn't fit. You have to actively search it to find the files
you need. And each time someone decides a class isn't "small" anymore, the
tree gains 2 or more extra files.

------
drenvuk
That digit span test is incredibly irritating. I've done it 10 times by now
and I can't get more than 9 digits.

On a more philosophical note, I've wondered if there are some outstanding
math or even societal problems that could've been solved already if humans
had the capability of holding onto 13 or 15 things at a time rather than just
7ish. It always feels like we're going to be limited in some way if the
problem is incapable of being broken down or collapsed.

~~~
pbw
Yeah, very few people can get more than 9 digits consistently; that's the
whole point really: 7 plus or minus 2.

Interestingly, chimps do have better working memory than we do; check out
this crazy video:
[https://youtu.be/nTgeLEWr614?t=8](https://youtu.be/nTgeLEWr614?t=8)

------
mojuba
Great article, a lot of interesting thoughts.

Another useful way of thinking of entities we deal with - variables and
functions - is their properties. For example, it's one thing if you have 7
references, but things get more complex if they are all nullable. Nullability
adds a new dimension, and you may well be dealing with 14 concepts rather
than 7 (`v` refers to an object of type `V`, but what if it's null?).

Similarly you may have variable `i` that's an index in some array `a`, however
the possibility of `i` being out of bounds is an additional concept for the
brain.

In other words, every extra "what if" adds complexity. In this regard,
high-level dynamic languages are at a big disadvantage, since any entity has
the possibility of having an unexpected type with an unexpected value.

Languages that allow reducing the scope of possibilities, on the other hand,
are at an advantage: you declare things with precise types and e.g. as
non-nullable wherever possible, you use enums wherever possible, you define
variables as computed wherever possible (Swift is sweet!), you avoid
unnecessary side effects with more functional-style code, etc. - all of that
reduces the number of concepts for the brain to deal with.
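The same idea in Python terms (the comment's examples are Swift-flavored;
Status and describe below are illustrative stand-ins):

```python
# An enum narrows "any string" down to three possible values, and a
# non-Optional parameter removes the "what if it's null?" question --
# two fewer concepts for the reader to track.
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    ACTIVE = "active"
    CLOSED = "closed"

def describe(status: Status) -> str:
    # No nullability or unexpected-type branches needed here.
    return f"status is {status.value}"
```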

~~~
Multicomp
You just explained why I first fell in love with F#. I've been trying to
figure out how to do that for years.

I didn't particularly care about functional style programming although I have
learned that it has its uses, but the fact that I can offload some of these
questions to the compiler rather than keep them all in my head helps me
immensely.

------
randcraw
The premise of the article is that the human brain's 7+/-2 short-term memory
rule should govern how software is designed.

That's the silliest thing I've read in a long time.

Unlike a sequence of random numerals, software is composed of _meaningful_
components and roles -- modules, processes, screens, functions -- each with a
well-defined and distinct job to do. Forcing these into a synthetic hierarchy
with a fan-out no greater than 7+/-2 makes sense only if the code reader is
trying to memorize each layer as if they were no more than random symbols --
meaningless and without context.

That's the least sensible strategy for software design I can imagine. If easy
memorization is the objective of S/W design (which is debatable at best) then
it's far better to shape the components into names with implied relations that
are already familiar - like design patterns, or better yet, the actors in a
compelling novel. Then the natural interplay of opponents and allies and
obstacles and desires will make for a much more coherent and memorable
narrative that will be far more intuitive than 7+/-2 randomly chosen labels
ever could be.

~~~
pbw
I never mentioned "memorization" of code in the article, and I don't feel that
"memorizability" is at all a goal. In my mind the goal is learning and
understanding and familiarity and navigating the landscape of the design, and
these things are easier when things are nicely factored into bite-sized
pieces.

Speaking of bites, it's very much like eating. You only put a certain amount
into your mouth, then you chew and swallow it; if you try to stuff a giant
hamburger into your mouth all at once you might choke. If you are forced to
puzzle over one giant monolithic object after another, it will be very
difficult to learn the system, but if the system is nicely factored it will
be easier. That doesn't sound at all controversial to me.

What do you feel is a good range for the number of attributes in an object? Do
you feel an object with 100 attributes is just as easy to understand as one
with 5 attributes? That seems unlikely to me.

------
cjfd
100% correct article. The compiler/runtime does not care if all of your
variables are global and all functions modify all of them. Almost all program
structure is there for humans and not for computers.

------
ryanmarsh
I get the feeling that many programmers look at programming as the act of
imparting a superior program (their vision) from a superior machine (their
brain) into an inferior machine (the computer).

I see an ape banging on a particle accelerator.

Still, we know a lot about how the ape’s brain works. You’d think we’d have
designed programming languages and even syntax highlighting schemes that
empirically lead to faster development and more correct programs. Surely we
could devise a paradigm that is empirically better for sense-making, but here
we are still arguing about semicolons and editor preference.

Programming is so tribal, religious, prone to fads and poor outcomes that it’s
cringe inducing when SWE act as if they’re superior to other people or
professions. Everyone who hasn’t had to pay programmers for an outcome thinks
they must be geniuses, everyone who has thinks they’re single celled
organisms.

~~~
pbw
This seems like the tip of the iceberg towards some interesting ideas. Have
you written anything up online?

~~~
ryanmarsh
I started work on a book a few years back but never published anything. I’m
guessing I should.

------
mrkeen
> The key thing to realize is a single object with a lot of attributes is
> itself not Object Oriented. Instead, it’s a 1970s style Structured Program
> in disguise. The attributes of the object are the global variables of the
> program, and the object’s methods are the program’s functions. Every
> function can freely access every global variable which is what causes many
> of the problems.

Yes!

------
lucteo
Nice article.

But, beware that grouping things also has its costs. It can grow the
complexity of the software both vertically and horizontally.

See also [http://lucteo.ro/2019/01/19/golden-mean-in-software-engineering/](http://lucteo.ro/2019/01/19/golden-mean-in-software-engineering/)
and [http://lucteo.ro/2019/02/16/clean-code-ne-well-engineered/](http://lucteo.ro/2019/02/16/clean-code-ne-well-engineered/)

------
hliyan
I believe it was Bjarne Stroustrup who said that programming is the art of
writing libraries and then using them. After two decades of programming, this
still holds true for me. The best way to handle complexity is to write and
verify layers with a well defined API, to be used in the layer above it, much
in the same way the protocols in the TCP/IP stack are built on top of each
other. Once a layer is verified, you can flush the complexity it encapsulates
from your brain and focus on the layer above it.

------
agronomov
Nice article, but I wonder what the author would say about working with ORMs.
When objects represent database tables it's not as easy to extract attributes
into objects. Every extract, rename or move is a database migration, where
you probably want to keep the old columns around for a bit in case things go
wrong. With the author's philosophy in mind, it feels like it's quite easy to
bury yourself in that process instead of adding direct business value.

~~~
pbw
I really have not worked with databases a ton. But I do think a huge thing I
didn't address in the article is how readily you're able to make changes at
all.

I'm really talking about a system where you're free to change things. Once
stuff is exposed to the outside world you might relinquish your ability to
keep factoring and refactoring. Although that's a reason to expose only an
API and not your actual object structure.

I'm not sure how this all relates to databases, but I suspect in many cases
there you simply can't change existing things; too much would break. So yeah,
I think that's a different ball game to some degree.

------
philipswood
Hmmm...

"Look! I've reduced complexity: decreasing the number of object members by
increasing the number of classes..."

This only works if I don't need to think about the new objects much.

~~~
pbw
It's a bad idea to add objects/classes/structs for no reason. If they are not
pulling their conceptual weight they should be removed. Hierarchy has costs;
flat is sometimes better.

But as I wrote in the article attributes are like global variables to an
object. Every boolean attribute you add to an object doubles the number of
possible states. An object with 10 boolean variables can have 1024 states.
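The arithmetic checks out; a quick enumeration of the boolean state space:

```python
# Each boolean attribute doubles the number of possible object states:
# 2**n combinations for n booleans.
from itertools import product

states_3 = list(product([False, True], repeat=3))
states_10 = list(product([False, True], repeat=10))

assert len(states_3) == 2 ** 3      # 8
assert len(states_10) == 2 ** 10    # 1024
```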

I find it's better to tamp down on the number of attributes by carefully
introducing select sub-objects that have value beyond merely reducing the
attribute count.

What has your experience been? How many attributes makes you start to wonder
if a sub-object would be appropriate?

~~~
philipswood
Agree in principle.

Nice article - I like the idea of designing code (or looking at code) with the
perspective of how it gets parsed by a human mind.

Great code _looks_ super simple, effortless, until you try to write something
similar and realise how much effort went into factoring it to be that simple.

That said - I don't think minimising object members is the main point of
attack.

I don't find the number of _members_ to be that taxing to working memory. It
seems to be naturally chunked: the members I'm working with and everything
else. I don't need to keep all the members in mind. It's like stuff in a
drawer - it's in one file so I can rummage to find the bits I need and keep
them "in hand", going a small factor over the working memory limit is fine.

(I'm not saying there isn't a retrieval cost, more members increase retrieval
cost, potential confusion, etc. etc.)

What seems to be a lot more taxing is keeping track of relationships with
other objects. Stuff NOT in the current file.

Having to keep the relationships between a number of other objects/classes in
mind is more taxing, because those need to be in working memory - and getting
them there means either pulling them from long-term memory (e.g. with a known
codebase or library) or parsing them out by navigating the codebase, tracing
the relationships and understanding the intents.

Ironically super-factored code can be a lot of effort to "get" at first, while
intermediate quality code reads pretty easily.

I find my mental map of a nicely factored object is tied more to its
intent/purpose or "meaning" than to the raw number of its members.

I'd parse your PerfEvent example the same way, though.

------
_ink_
I can relate very much. In my company I am working on a complex distributed
component as the single dev. I have created exactly what the article
mentions: an overly complex structure which I can hardly understand and
surely nobody else will.

How can one train to write well-structured, easily understandable code? It is
not just code, but also the overall architecture and data flow.

~~~
slx26
write down the system requirements on paper, really study the problem before
trying to solve it. like, for hard problems you really need to allocate 50% of
your time to planning and system design. focus on modularity, and try to keep
processing and interpretation as separated as possible. it's not the code that
needs to be simple to understand, it's the diagrams representing the system.
if you can recursively break down the system into modules that make sense on
their own and don't mix different concerns (like what I said about processing
vs interpretation: sometimes you don't have an explicit code dependency
between modules, but you have too many dependent implicit interpretations that
the code relies on), you'll be in much better shape. the code is not so important
there. for example, a crypto hashing package might contain very tricky code,
but if the surface area of its API is small enough and you understand what
part hashing is playing in your system, having it as a blackbox is not a
problem. that said, with hard problems it's hard to gain enough insight until
you actually make a first implementation and see where your model is lacking.
and this tends to take more time than you often have.

------
tgv
> Thinking in general works very much like vision.

Well, that's a sweeping statement. No reason to read any further. One simple
argument against it: thinking about problems is supported by far more
memories, and of a different kind, than e.g. face recognition, and it's much
slower, too.

~~~
pbw
I believe how thinking works has many parallels with how visual perception
works, because they both use the same underlying hardware. Chunking and
hierarchy are super primitive and very likely abilities that permeate the
brain.

Kurzweil (I know, I know) in his book How To Create A Mind writes about how we
read:

    
    
        For example, to recognize a written word there
        might be several pattern recognizers for each
        different letter stroke: diagonal, horizontal,
        vertical or curved. The output of these recognizers
        would feed into higher level pattern recognizers,
        which look for the pattern of strokes which form
        a letter. Finally a word-level recognizer uses the
        output of the letter recognizers. All the while
        signals feed both "forward" and "backward". For
        example, if a letter is obscured, but the remaining
        letters strongly indicate a certain word, the word-level
        recognizer might suggest to the letter-recognizer which
        letter to look for, and the letter-level would suggest
        which strokes to look for. Kurzweil also discusses how
        listening to speech requires similar hierarchical
        pattern recognizers.
    

[https://en.wikipedia.org/wiki/How_to_Create_a_Mind](https://en.wikipedia.org/wiki/How_to_Create_a_Mind)

------
Ixiaus
I like the idea behind "brain oriented" code but I'm skeptical of Python's
suitability for the task. A language that gives you explicit types, algebraic
data types, and can separate effects from pure code is a crucial first step
towards taming complexity IMHO.

------
mvn9
7 attributes could be too much. Those attributes belong to a class, not an
object. If those attributes are binary, that already gives 128 possible
objects within that class. If you have to think about their interactions, you
are already well beyond 9 elements.

------
imhoguy
> "To reduce the number of attributes we introduce two new classes or structs"

So practically we are shifting the complexity to higher abstraction levels.

Let's be honest - programming is complex.

~~~
tchaffee
And where is the problem? Abstraction in programming is one of the most
powerful tools we have. Perhaps the author should have warned about the anti-
pattern of premature abstraction, but it was a very light treatment of the
subject and the real value was the "object properties as global variables"
insight.

~~~
imhoguy
Exactly, abstraction has consequences too: an explosion of classes, packages,
modules and libraries, and thus hierarchy, wiring, dependencies, lost
versioning history when moving things around, etc. The author assumed we are
now saved from the hassle by introducing a new abstraction in a naive
example. It is not so easy.

~~~
pbw
The example was a real-world example I encountered the morning that I wrote
the article. The article is 1500 words, I'd love to do a 5000 word version
sometime with more in depth examples.

As for the explosion of things, there's accidental complexity and necessary
complexity. Complicated things are going to have complexity somewhere. The
question is where to put it.

The article is arguing for a balanced, carefully groomed tree of objects. In
other comments I clarify that I'm talking about attributes, not methods or
APIs. Flat APIs can be wonderful. Attributes are a different story, since
they are global variables that complicate every single method in the object,
past and future.

~~~
imhoguy
That is clear. Thanks.

------
imvetri
Disoriented brain programming. :D

