
Climbing the infinite ladder of abstraction
https://lexi-lambda.github.io/blog/2016/08/11/climbing-the-infinite-ladder-of-abstraction/
======
buzzybee
After climbing for a while, I went back down the ladder to where I started and
tried to use fewer features to do more. My code aims for reusability by cut and
paste as the first step - not as the end point, but as an important validation
of whether the implementation can actually stand on its own as a drop-in
block of code, not reliant on tricks or external dependencies. When I want to
refactor, my first step is to inline code and data until it cannot be inlined
further, so that new approaches for reuse appear.

Simultaneously, I started aiming more and more precisely towards a higher
"level of completion" in abstracting. When I want to abstract heavily, I do
not reach for an abstraction-shaped syntax within the language and bend my
abstraction to fit theirs; I write a compiler, and write the actual
thing I want, with the type system, internal behavior, and error checking I
want. It sometimes takes a month or more, but then I have something far
more valuable to my codebase than a general-purpose tool. This does not happen
often; the compilers exist as part of an ecosystem. Most of the things that
seem "DSL-like" can be couched in terms of reconfiguring an existing compiler
around new APIs.

I have very few debugging problems within application code; most of my time is
spent either on data modelling, or compiler maintenance.

We do not know what good code looks like. We do not know what bad code looks
like. We know only that we are progressing slowly or encountering a lot of
bugs, and some of those issues are related to the shape of the code.

~~~
chriswarbo
> When I want to refactor my first step is to inline code and data until it
> cannot be inlined further, so that new approaches for reuse appear.

I do a similar thing. My first attempt is usually full of layers of helper
functions and not-quite-right abstractions, since I was simultaneously trying
to understand and solve the problem.

Once I've got something that works, I refactor it to collapse these layers and
remove the bias introduced by my previous fumbling. Inlining reduces external
references, and point-free style eliminates local references (
[https://wiki.haskell.org/Pointfree](https://wiki.haskell.org/Pointfree) ).

After such a refactoring, the code is dealing much more with inescapable
features of the problem, rather than fighting against itself. At this point, I
can do a final cleanup to introduce abstractions to reduce redundancy and aid
understanding.
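As a toy illustration of that inline-then-clean-up cycle (hypothetical helper names, sketched in Python rather than Haskell):

```python
# Before: exploratory layers of not-quite-right helpers left over
# from figuring out the problem.
def _square(y):
    return y * y

def _squares(ys):
    return [_square(y) for y in ys]

def sum_of_squares(xs):
    return sum(_squares(xs))

# After inlining the helpers, the scaffolding disappears and only the
# structure of the problem remains; abstractions can then be
# reintroduced deliberately, if they still pay for themselves.
def sum_of_squares_inlined(xs):
    return sum(y * y for y in xs)
```

The point-free step is a Haskell-specific refinement of the same idea: once the local names are gone, what is left is only the shape of the computation.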

------
niftich
Good post. It laments how we adopt different programming paradigms, all in the
name of solving the problem more elegantly and modeling it more purely, yet we
inevitably get bogged down in some language- or platform-specific quagmire or
bikeshed; and all for what?

To answer the author's question, I believe we constantly hunt for a 'better'
abstraction because we want to make code reusable - as in, callable as an
independent unit or copy-pasteable somewhere else. In truth, we rarely do
this; instead we build something new, or build on top of it, climbing even
higher up the ladder.

It's also intellectual golf. A _lot_ of software is in fact written to be just
good enough and barely working; this duct tape runs most of our non-mission-
critical, line-of-business computing. This level of preoccupation with
modeling only occurs when the pressures to deliver aren't great enough to
preclude such explorations.

~~~
MaulingMonkey
I get interested in this kind of stuff in an attempt to avoid the "barely
working" quagmire, which has its own problems (namely, ongoing maintenance
when it stops working). Less for reuse - what abstractions will help me avoid
bugs? Which abstractions can simplify this and make it faster to write or
modify? Which will be easiest to read and thus maintain? It's rare - although
not impossible - that the answer to any of these is "callback hell".

"Reusable" just sounds too similar to "YAGNI violation" to me these days.

------
ap22213
Hasn't there always been a Mathematical tug of war between theorists and
practitioners? Theorists get big dopamine responses when discovering new
abstractions and models. Practitioners get similar responses when solving
real-world problems.

As long as the theorists don't run too far away, the practitioners are happy.
And, theorists are happy as long as they have enough food, water, and shelter
to allow them to continue to think about stuff. :-)

As a (mostly) practitioner, I don't feel that a lot of the abstractions that
the author identifies are helping me write better code faster. For instance,
90% of my time is taken up by just a few things:

- Transforming data from one structure to another.

- Moving data around.

- Writing and re-writing similar business rules and domain logic in different
libraries and languages.

- Writing 'adapters' to get one library to work with another.

- Trying to find 'the right' data structure or algorithm.

- Discovering that the many different implementations of the same data
structure or algorithm in different languages don't 'do' the one thing that I
need them to do (and having to write my own implementation).

- Reading other people's code.

- Debugging other people's code.

~~~
svanderbleek
Actually, the abstractions they are talking about are explicitly aimed at most
of your use cases:

- Transforming data from one structure to another.

- Moving data around.

- Writing and re-writing similar business rules and domain logic in different
libraries and languages.

- Writing 'adapters' to get one library to work with another.

- Trying to find 'the right' data structure or algorithm.

There are answers to these problems using mathematical structures common in
Haskell: Functor, Applicative, Monad, and so on.
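For the first two bullets in particular, the Functor idea is roughly "apply a function inside a structure without disturbing the structure itself". A loose Python analogy of Haskell's `fmap` (in Haskell each type supplies its own instance via the type class; here it is faked with runtime checks):

```python
# A loose analogy of Haskell's fmap: transform the data inside a
# structure while leaving the structure itself alone.
def fmap(f, container):
    if isinstance(container, list):
        return [f(x) for x in container]
    if isinstance(container, dict):
        return {k: f(v) for k, v in container.items()}
    if container is None:        # analogue of Maybe's Nothing
        return None
    return f(container)          # analogue of Maybe's Just

# The same transformation applies across different structures:
fmap(str.upper, ["a", "b"])      # structure: list
fmap(len, {"x": "abc"})          # structure: dict values
fmap(len, None)                  # structure: absent value
```

The payoff is that "transforming data" stops being a per-structure chore: the traversal logic is written once per structure, and the business logic is just the function you pass in.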

------
pka
_I just really hope I’m not wasting my time._

I think this is very easy to verify: just climb down the abstraction ladder and
try to solve some particular problem without those abstractions. If you then
find yourself yearning for monads, macros or whatever else, then no - you
haven't.

One particular thing that stuck with me over the past few months was the
continuation monad/coroutines abstraction PureScript's Thermite library is
built on. I happened to work on a project with another colleague; we were both
writing some UI stuff, and while he was fighting with callback hell I was
happily writing sequential-looking code in the Aff monad.

It's not a particularly heavy abstraction, but something that can make your
life _so_ much easier. Most abstractions I use tend to fall into that uhm...
category :)
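Thermite and Aff are PureScript-specific, but the contrast described here shows up in any language with coroutines. A rough Python sketch of the two styles, with hypothetical fetch functions standing in for real asynchronous I/O:

```python
import asyncio

# Callback style: each step nests inside the previous step's
# callback, and the control flow drifts rightward.
def fetch_cb(url, on_done):
    on_done(f"data from {url}")

def flow_with_callbacks(done):
    def step1(a):
        def step2(b):
            done(a + " | " + b)
        fetch_cb("second", step2)
    fetch_cb("first", step1)

# Coroutine style (the moral equivalent of Aff): the same flow
# reads top to bottom, as if it were sequential code.
async def fetch(url):
    await asyncio.sleep(0)   # stand-in for real asynchronous I/O
    return f"data from {url}"

async def flow():
    a = await fetch("first")
    b = await fetch("second")
    return a + " | " + b
```

Both versions compute the same result; the difference is entirely in how much structure the reader has to reconstruct in their head.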

~~~
MaulingMonkey
> I think this is very easy to verify: just climb down the abstraction ladder and
> try to solve some particular problem without those abstractions. If you then
> find yourself yearning for monads, macros or whatever else, then no - you
> haven't.

I yearn for things that don't actually make me more productive at times.

"Will this save more time than it costs?"

There are lots of factors influencing that which are easy to forget or to
estimate poorly.

You're right that going back to basics and seeing the pain points that arise
(or go away!) is informative, however.

------
pbohun
I used to enthusiastically climb the ladder of abstraction. Learning and
applying advanced concepts made me feel smart, plus they were _so cool_.
However, I found I never actually reused my super-awesome-advanced code. In
fact, it was just more compact, less legible code, like APL.

I went back to the basics. I started writing very simple, short, non-
abstracted libraries/modules that did one thing well, in true UNIX fashion.
Now I have legible code, to myself and others, and I _actually_ reuse it. My
applications are now built out of battle-tested reusable components, and my
productivity is higher.

------
eggy
Abstraction is necessary, but it all depends on context. If you take a week to
come up with a better abstraction, but the product is delayed with no real
benefit other than the purity of your abstraction, then the process has failed.

I like this quote by Teddy Roosevelt: "Do what you can, with what you have,
where you are." The antidote to "paralysis by too much analysis".

I've worked in some high-pressure jobs, and I don't mean coding (underwater
emergency repairs, for example). Even when you know what you are about to do
is not the optimal solution, if it gets the job done in a decent way, there's
no choice: you do it that way. And you get better at predicting how much you
can really afford to abstract away on future jobs.

Now, in coding, if you truly enjoy the process and the academic side of it, you
should consider going into academia. Tools are later produced from the fruits
of that research. In the business world, you need to be able to deliver a
product that works, is solid, and does the job.

I sympathize, because I spend all of my free time studying. I enjoy getting
deeper and deeper into mathematics and coding - mainly math. I started in the
applied world and ventured off into the abstract world. I love it. I wouldn't
want a job in it, though, since I would not be able to stop myself from
following the trail, blind to the objectives of the job.

------
larve
The code I come back to when I want to see an elegant combination of advanced
concepts and exhilaratingly simple code is Peter Norvig's "Paradigms of
Artificial Intelligence Programming". While it is Common Lisp, most of the
code stays far away from the more abstract parts of the language (CLOS), and
uses macros judiciously.

I think you can get most of the way there using the simple concepts of your
programming language (functions, classes), but I feel that concise and simple-
to-understand things are more easily implemented with one or two well-chosen
macros.

I so often long to have 2 or 3 nice macros to do really tedious stuff in my
day job (C++), like:

- Dispatching states in a state machine, with logging and safety checking.
While I can do most of the logging and a fair amount of sanity checking in C++
with a few ugly macros converting symbols to strings, I can't easily do
coverage checking at compile time, which would be very easy to do if I had a
"defstate" and a "defevent" macro.

- Parsing and validating JSON by pretty much just describing the resulting
struct and letting the compiler do the rest. Again, I have a literal army of
JSON_STRICT_DECODE_DOUBLE etc. methods, but none of that allows me to
generate protocol documentation at compile time.

Honestly, 50% of the boilerplate in the application comes from these two
problems (I do embedded programming), but it's a pattern I ran into when I was
doing web development too. Adding decent logging and error checking is really
hard to do short of adding language constructs, and in the kind of
down-to-earth programming I do, that's about 80% of what I do as a programmer.
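The decode-by-describing-the-struct wish is roughly what Lisp macros or Haskell's Generic deriving provide. A hedged sketch of the same idea in Python (the `Config` struct and field set are hypothetical): a dataclass's field annotations are enough to drive a strict decoder, so no per-field DECODE macro is needed:

```python
from dataclasses import dataclass, fields

# Describe the struct once...
@dataclass
class Config:
    name: str
    threshold: float
    retries: int

# ...and derive a strict, validating decoder from that description,
# instead of hand-writing one DECODE_* call per field.
def decode(cls, raw):
    out = {}
    for f in fields(cls):
        if f.name not in raw:
            raise ValueError(f"missing field: {f.name}")
        value = raw[f.name]
        if type(value) is not f.type:  # strict: no implicit coercions
            raise TypeError(f"{f.name}: expected {f.type.__name__}, "
                            f"got {type(value).__name__}")
        out[f.name] = value
    return cls(**out)
```

The same field list could also drive generated protocol documentation, which is exactly the part a pile of C++ preprocessor macros can't reach.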

------
dsego
Solving the general problem, illustrated well by xkcd:
[https://xkcd.com/974/](https://xkcd.com/974/)

~~~
eggy
I love that one, because I know at least two people who make me feel like the
guy who just wants his salt. Sometimes nobody will need it in the long run, or
nobody is paying for it in the short run.

------
pron
1. Software development is such a complex process that there is no way to
know a priori whether more or less abstraction is "better" in every
circumstance. There is also no way to know a priori _what kinds_ of
abstractions are "better" or "worse". The only way to know is through
empirical study.

2. There _is_ an inherent reason[1] why software development is so complex,
and it is the easy answer to that question that makes the answer to the first
question so hard.

[1]: [http://blog.paralleluniverse.co/2016/07/23/correctness-and-c...](http://blog.paralleluniverse.co/2016/07/23/correctness-and-complexity/)

------
tome
> Haskell is too limiting—the compiler cannot deduce constraints I can solve
> in my head in seconds

Alex, if you could do a post with concrete examples of these that would be
_really_ helpful!

~~~
notjack
*Alexis, not Alex

~~~
tome
Thanks. Can't edit any more though.

------
userbinator
_This is a bit of a cheap example, given that Java getters and setters are
something of a programming language punching bag at this point, but I really
did write them, and I really did get frustrated by them!_

...then don't write them if you don't need them. It's perfectly possible to
write relatively concise and straightforward Java if you just ignore the
massive quantities of dogmatic indoctrination effluent that abounds in that
culture. Not everything needs to be an object. Not everything needs to have an
API. Not every "best practice" is to be followed. Not every new feature needs
to be used. I have some pretty amusing stories about that from the time I
(briefly) worked in Enterprise Java...

 _Yet very few new programs are being written in BASIC, and lots are being
written in Haskell._

You would be surprised how much Visual Basic and its variants are still being
used.

That said, I never did "climb the ladder of abstraction" much, because the few
times I tried, there just didn't seem to be any clear advantage in terms of
productivity. I don't consider abstraction as something to be applied whenever
possible; it's closer to a last-resort method used only when other approaches
to simplifying code fail. And as programmers like Fabrice Bellard show,
abstraction != productivity (he works mainly in C). As someone who also stays
relatively low in terms of abstraction levels, may I suggest the author of
this article try Asm ;-)

To paraphrase someone in the demoscene who shares much the same viewpoint,
"Some people look at a problem and see objects, models, layers, APIs, and
other countless abstractions. I look at a problem and see a machine executing
instructions to solve it."

------
mannykannot
It is refreshing to read an article on this topic in which the author is
reflective and equivocal about what she is doing.

I wonder if part of the issue is that we are going through a period of rapid
evolution in programming, in which more formal concepts of abstraction are
being experimented with. It will take a while before it becomes clear what
works well.

It is commonly observed that programming well is harder than it looks. Perhaps
resolving that dissonance will lead to languages that reveal the underlying
complexity and force the programmer to address it before it is revealed in
testing or use.

Not that this is necessary in all areas of programming: I agree with those who
say that these techniques are probably overkill for most business
applications, where robust simplicity is often more desirable than maximal
abstraction.

------
skybrian
Since Go is fairly abstraction-resistant, you end up declaring APIs for
concrete things.

For example, there is not much point in writing a generic Set class. There are
lots of packages available but they tend to solve concrete, domain-specific
problems. The difference between this and the typical Haskell library where
you need to be mathematically sophisticated to even know what problem it
solves is pretty stark.

It would not be my favorite language for writing a compiler for an evolving
language. But for scripts and simple servers it's pretty good.

~~~
codygman
> The difference between this and the typical Haskell library where you need
> to be mathematically sophisticated to even know what problem it solves is
> pretty stark.

I'm a Haskell developer and the most math I know is Algebra 2. I also don't
have a degree. I write Go for work. You don't need to know much math for
Haskell. Go's lack of generics is a HUGE mistake and bites my teammates and me
often.

------
CyberDildonics
What always seems to get lost is that people focus on the process and not the
data they are working with. If you focus on what the data is and what it needs
to be, you can avoid a huge amount of abstraction.

Most of the time with this approach, at some point you can see interfaces that
can apply to multiple types of data you are working with, but in my experience
this doesn't come at the beginning of the process of writing a program; it
comes somewhere in the middle.

