
Abstraction, the Anti-Pattern of Computation - RFVenter
https://rfventer.github.io/blog/0161008AbstactionAntiPattern/
======
clusmore
Abstractions hide _irrelevant_ details, with heavy emphasis on the word
_irrelevant_. "Abstractions" over relevant details are bad abstractions.
Relevant is not an absolute term. Details are relevant or irrelevant in
different circumstances, so you can't concretely say whether any given
abstraction is good or bad - it depends on the context. But when the details
actually are irrelevant, abstractions _improve_ clarity rather than detract
from it, because they hide no information that would help to clarify.

My favourite example is navigational directions. If you ask me for directions
to a particular building, I could say "It's the building to the left of the
XYZ building in the city." If you know where the XYZ building in the city is,
this is very clear and concise. It's much better than if I had instead said
"Okay well from here, go North for 100m, then turn left, then continue going
straight for 1km, then turn right, ..." If you don't know where the XYZ
building is, the latter directions are more helpful.
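
The same trade-off shows up in code. Here is a minimal Haskell sketch of my
own (not from the article): the abstract version is clearer precisely when
the hidden steps are irrelevant to the caller.

    import Data.List (sortBy)
    import Data.Ord (comparing)

    -- Abstract: "sort these people by age". The algorithm, the
    -- comparisons, the memory layout - all hidden, because the
    -- caller doesn't care.
    byAge :: [(String, Int)] -> [(String, Int)]
    byAge = sortBy (comparing snd)

    -- Concrete: a hand-rolled insertion sort. Every step is
    -- visible, which only helps if those steps are what you
    -- actually care about.
    byAge' :: [(String, Int)] -> [(String, Int)]
    byAge' = foldr insert []
      where
        insert x [] = [x]
        insert x (y:ys)
          | snd x <= snd y = x : y : ys
          | otherwise      = y : insert x ys

    main :: IO ()
    main = print (byAge [("Bo", 34), ("Ann", 28)])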

------
jonny_storm
In mathematics, abstraction is little more than substitution. This is what
Wittgenstein was referring to when he said all logical propositions are
tautologies--that all abstract notions can be reduced to compound statements
containing what he called atomic facts.
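
A toy instance of that reduction (my illustration, not Wittgenstein's): the
abstract schema

    (p \land q) \to p

is a tautology - substitute any atomic facts for p and q, under any
assignment of truth values, and the compound statement comes out true. The
abstraction says nothing over and above its substitution instances.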

"Abstract" needn't mean "vague" or "fuzzy." Consider your favorite high-level
language, which necessarily compiles to machine code. The abstraction
introduced by the high-level language corresponds precisely (for a target
architecture) to a set of instructions that perform the intended function.
That it does not do so _uniquely_ is another matter, and perhaps this is the
sort of gap you are (understandably) concerned about.
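
To make the "not uniquely" point concrete, here is a rough Haskell sketch of
my own: two expressions of the same abstraction, which a compiler is free to
lower to different instruction sequences, both correct.

    -- One abstraction, "the sum of 1..n", written two ways.
    sumTo :: Int -> Int
    sumTo n = sum [1 .. n]      -- via a library abstraction

    sumTo' :: Int -> Int        -- via explicit recursion
    sumTo' 0 = 0
    sumTo' n = n + sumTo' (n - 1)

    main :: IO ()
    main = print (sumTo 10, sumTo' 10)  -- (55,55)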

The argument that abstraction is inherently bad is demonstrably false. (If
I've inadvertently constructed a strawman, I apologize, but that did seem to
be the argument put forth.) Must I understand the computations--and they are
computations!--occurring at the sub-atomic level to write a program? Certainly
not.

Indeed, I claim there is a limit to the amount of raw information we can
process effectively. Further, I claim that familiarity with a system or
concept correlates with the amount of irrelevant information we can
effectively categorize and ignore. I won't support these here, but I think
they are reasonable, if imprecise.

Perhaps, then, the right amount of knowledge is the right amount of knowledge.
At least some limited understanding of quantum physics is necessary to program
a quantum computer, or a simulation of one, but my failure to write a correct
or easily understood program need not precipitate replacing code with gates or
something similarly tedious. Likewise, doing away with subclasses or generics
or functions or structured programming constructs seems more an instance of
throwing the baby out with the bathwater--possibly for the wrong reasons in
the first place.

Tangentially, there's a wonderful correspondence between symbolic substitution
and proof reduction that Phil Wadler explains in his paper "Propositions as
Types," and there's a great talk he gives on the same. Here, abstraction is
more or less the opposite of beta-reduction, which is just substitution, all
of which is gloriously precise.
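
As a small sketch of that precision (my example, not Wadler's),
beta-reduction really is just substitution:

    -- Applying a lambda abstraction substitutes the argument
    -- for the bound variable:
    --
    --   (\x -> x + x) 3
    --     ==> 3 + 3   -- substitute 3 for x
    --     ==> 6
    double :: Int -> Int
    double = \x -> x + x

    main :: IO ()
    main = print (double 3)  -- 6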

Wall of text aside, if the post's thesis were merely, "We should be more
careful," then I could not agree more.

------
atom-x
I tried, but the article lost my interest after several typos and the
terrible use of bold + italics, which makes it mostly unreadable.

------
flukus
I wish that were the level of abstraction I have to deal with. Instead there
are another 17 layers spanning multiple servers filled with "business logic".

