
Ask HN: Do you mostly program in a non-FP or non-OO style? Why? - open-source-ux
Functional and object-oriented styles of programming dominate programming languages today. Are you using a language that has neither of these features? If so, what is the language and why do you use it?<p>Or are you using an FP or OOP language but rarely use the FP or OOP features? If so, why?
======
falcolas
Primarily a Python and Go programmer.

OO: Only rarely, and only where it really makes sense. Most programs I work on
are not complex enough to justify the overhead (boilerplate, cognitive).

FP: I use this more, but again, only when I can really justify the added
cognitive overhead.

You'll notice the two references to cognitive overhead: I have adapted the
"write for a 6th grade level" idiom into programming. I want as many people as
possible to be able to pick up the code and modify it. I work with everyone
from interns to decades-plus veterans, and my code must be grokkable by both.
Pure FP tends to confuse interns, and veterans tend to hate the layers and
layers of indirection and abstraction OO brings into a program.

Normal and boring old "an imperative main method with functions" tends to be
the least offensive and most understandable to all parties. Compromise is fun.
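That boring shape, sketched in Python (a made-up toy example; the names and data are invented, the point is the flat main-plus-functions structure):

```python
def parse(lines):
    """Strip blanks and whitespace; one record per line."""
    return [line.strip() for line in lines if line.strip()]

def count_by_key(records):
    """Tally records by their comma-separated first field."""
    counts = {}
    for record in records:
        key = record.split(",")[0]
        counts[key] = counts.get(key, 0) + 1
    return counts

def main():
    # In a real script these lines would come from a file or stdin.
    lines = ["a,1", "b,2", "", "a,3"]
    for key, count in sorted(count_by_key(parse(lines)).items()):
        print(f"{key}: {count}")

if __name__ == "__main__":
    main()
```

Nothing clever: data flows top to bottom through named functions, and anyone at any experience level can trace it.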

~~~
qorrect
Programming for the lowest common denominator? Something about that sounds
wrong.

I generally agree with "use only as much cognitive overhead as needed to get
the job done" (I'm enjoying that term) - but it seems like a slippery slope
that could slowly bring everyone's abilities down to the level of the least
capable developer.

~~~
stouset
It's also an excuse to avoid any sort of abstraction whatsoever, which makes
solutions to genuinely difficult problems impossible to read, since one has to
wade through the low-level details at every step.

To me, it's a bit like telling a person how to get from point A to point B by
telling them when, how hard, and for how long to step on the gas and what
angles to put the steering wheel at. Sure, that might work to get someone out
the driveway, but good luck getting to the grocery store that way.

I will genuinely never understand the go philosophy that "excessive"
abstraction is bad, with no attempt to justify why _current_ levels of
abstraction — which are charitably hundreds of times more complicated than the
difference between, say, go and ruby — are good, right, and just. Everything
we do on a computer from processing to memory access to networked
communications to rendering to handling input to relational data modeling to…
_anything_ is dozens of layers away from what's actually happening. But
somehow right now we're at the optimal level of abstraction and any more would
just be _too much_. Okay.

~~~
Veedrac
> no attempt to justify why _current_ levels of abstraction — which are
> charitably hundreds of times more complicated than the difference between,
> say, go and ruby — are good, right, and just

It seems to me that Go is part of a reaction from people who are _not_ happy
with current levels of abstraction.

~~~
dbaupp
I think the grandparent means that Go is already very abstracted from what's
actually happening (e.g. programmers don't have to think about quantum
mechanics, voltages, machine code (i.e. the actual numbers), microcode, CPU
caches, virtual memory, finite memory/memory management (GC), implementing
runtime type-checks or virtual dispatch) like many other languages. On one of
those "spectrum" diagrams everything is bunched really far to the right:

    
    
                                                 asm    Go Ruby SQL
      reality <---------------------------------------------------> abstract
    

Their point is something along the lines of: there's not as much justification
for why all that stuff to the left is "good" abstraction, versus the
justification for why the tiny extra bit to the right is "bad". The line
between good/bad abstraction seems to be fairly arbitrary.

~~~
Veedrac
This feels like a false comparison to me. When you write ASM you pretty much
know what it's going to do.

When you ADD two registers, you really are making two reads in the register
file, which _really is_ a piece of physical hardware, and those reads _really
do_ go to a physical adder and get written back to the real, physical register
file. And languages like Go translate reasonably (but not exceptionally)
straightforwardly to assembly. Even if you don't know all of the details that
go into making it fast, your first guess of _what_ it's doing is pretty much
right.

In contrast, adding some lazy FP callback into your latest node.js framework
on DOM objects might be doing something, somewhere, but who on earth really
knows?

~~~
stouset
> This feels like a false comparison to me. When you write ASM you pretty much
> know what it's going to do.

This is because it's an excellent layer of abstraction on top of the machine
code which is interpreted by the abstraction of microcode which sends it to
the abstraction of a physical processor that's itself just an abstraction on
top of transistors that themselves are an abstraction around manipulating
voltages on a complex network of circuits.

Nobody bats an eye at these levels of abstraction. And they're not perfect!
Modern CPUs have _hundreds_ of errata. Skylake's alone is almost 40 pages
long, at 3-4 per page.

But you take them for granted because they're several layers lower than what
you have to deal with on a regular basis.

Even your example of adding two registers required compiling through an
assembler, scheduling execution by a time-sharing operating system that fakes
the concept of running hundreds of parallel processes by rapidly looping
through running processes, and on and on and on.

We are awash in a sea of abstractions more deep and complex than even a dozen
of the world's best engineers put together could hope to fully understand. And
yet people earnestly defend language design decisions that prevent a single
function from comparing two values of any arbitrary numeric type as being too
complex, when issues like that are less than a hundredth of a hundredth of a
hundredth of a percent of the complexity of modern computers.

~~~
Veedrac
Electronics Weekly says[1]

> “The M0 is a third of the size of the M3 in its minimal configuration,” ARM
> CPU product manager Dr Dominic Pajak told EW – 12,000 against 43,000 gates.

That same page is 2.8MB _compressed_; the half of it that is Javascript
decompresses to 2.6MB.

Yes, a big OoO core has a lot more to throw around, but the vast majority is
spent on tricks to make things go faster. The layers are thinner than you
expect, and they're built that way on purpose. The hardware below assembly is
a _far_ smaller jump than the browser in the sky.

[1]: [https://www.electronicsweekly.com/news/products/micros/arms-...](https://www.electronicsweekly.com/news/products/micros/arms-cortex-m0-processor-how-it-works-2009-03/)

~~~
stouset
Over two-thirds of that is ad-related code. The HTML, CSS, and basic
Javascript needed to run the site appears to be on the order of < 250KiB.
Images obviously bump that number higher.

You are not arguing against abstraction. You're arguing against the user-
hostile influence of advertising on delivery of content on the web.

~~~
Veedrac
And you think adblockers have a legitimate reason for their codebase to be 3x
the size of DOOM? If all this abstraction, from the low-level C code of DOOM
to high-level Javascript in the browser, bought us anything, you'd think the
more abstract one, which is solving an easier problem, would take _less_ code,
not more.

~~~
Veedrac
s/adblockers/adverts and trackers

------
llogiq
Data scientist/engineer here. I write Java by day, Rust by night and in both I
embrace a data-driven design, where I first carefully lay out the data so that
it's easy to use within the code. In hot loops, I avoid dynamic dispatch for
its runtime cost.

I don't fully embrace the FP style either: while I generally limit mutation
(`final` almost everywhere), I use mutation where it leads to code that is
easy to understand but would have to be exceedingly "clever" in FP style.

------
hyperpallium
I mostly use a procedural style (i.e. C like) in java, for working out new
ideas in solo projects.

It's simpler, more flexible, less verbose and easier to follow than a full-on
OO style. A great example is the early calculator, e.g. in ed. 2 vs. ed. 3 of
the compiler "Dragon" book: the older edition uses C, the later one uses OO
Java... and it's so much worse.

However, OO is great for wrapping up modules of functionality for which you've
understood and settled on an informed architecture (or you just need to sweep
repetitive boilerplate away).

I think inheritance is just about completely useless (but not quite
completely), and (java) interfaces are great - if you have more than one
implementation.

Big, multi-person projects are a different story.

I don't use much fp-style, except where recursion is natural (e.g. parser
combinators); or for plug-in functions (hardly fp though).

~~~
morbidhawk
I was writing a small but semi algorithm-heavy library in C# that didn't rely
much on other libraries. I later decided to port it to Java and realized that
the more C#-like I had written the original code the harder it was to port.
Later when trying to port it to JavaScript I ended up re-writing the C#/Java
code to rely less on the standard libraries since each standard library
differed so much. At that point my code started to look very procedural in the
OO languages possibly similar to what you've been doing in Java.

I came to the realization that I didn't even need a lot of the language
features for what I was doing, and decided to switch to C so as not to have to
maintain various versions of the library. It was a steep learning curve, but
I've come to really enjoy the simple yet powerful capabilities of C.

~~~
le-mark
Thanks for sharing this, it's a great, succinct example of the journey many
experience. You could flesh this out and post it on medium or something to get
the ideas out there for discussion.

------
analog31
Procedural programming is what I learned first (Pascal), and I trust myself to
write good code in a procedural style.

I do some microcontroller programming, and it's usually straight C. The
hardware registers are all global, and their contents change based on external
stimuli, so that kind of rules out the idea of stateless programming.

The earlier versions of Visual Basic had kind of a compromise, where it came
with a lot of pre-made objects, and you could create objects if you got the
special kit, but the casual programmer was only expected to _use_ objects. I
kind of adhere to that idea when I write in OO languages such as Python.

I use OO sparingly when programming Python, often to encapsulate hardware
functionality, but then use those objects within programs that still look
procedural.
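In Python that pattern might look like a thin class around one device, used from otherwise procedural code (the sensor class, its port, and its constant reading are invented stand-ins for real hardware access):

```python
class TemperatureSensor:
    """Encapsulates one piece of hardware; everything else stays procedural."""

    def __init__(self, port):
        self.port = port

    def read_celsius(self):
        # A real implementation would talk to the serial port here;
        # this stand-in just returns a fixed value.
        return 21.5

# The surrounding program: plain top-to-bottom procedural code.
sensor = TemperatureSensor("/dev/ttyUSB0")
readings = [sensor.read_celsius() for _ in range(3)]
average = sum(readings) / len(readings)
print(f"average: {average:.1f} C")
```

The object keeps the hardware details in one place, but the program's control flow stays readable as a straight script.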

I avoid using inheritance, mainly because I don't trust myself to do it in a
maintainable way.

------
Animats
I've written almost entirely object-oriented code for two decades. My programs
have almost no global variables.

I've tried functional programming in Rust, but it's not going well.[1] I don't
like the
"x.and_then(|foo|).if_even(|bar|).except_on_alternate_tuesdays(|baz|)" style.
Rolling your own control structures is not good for readability.

[1] [https://github.com/John-Nagle/rust-rssclient/blob/master/src...](https://github.com/John-Nagle/rust-rssclient/blob/master/src/wordwrap.rs)

------
nnq
1\. "Shallow" OOP - minimal use of inheritance, composition ok but without
making a Russian doll with 7+ layers with it.

2\. "Grand scale functional characteristics" \- the exposed API should aim for
"referential transparency" and "composability" (you can write systems with
"functional properties" in languages like PHP just fine, btw.)... I find
little benefit in "small scale / low level" FP.

3\. Wrap stateful algorithms & other code rich in mutable variables in either
(a) referentially transparent functions or (b) shallow objects that make it
obvious how and when state changes.

4\. Avoid the "islands of functional purity in a sea of objects" pattern like
the plague - it results in intellectual masturbation at the small scale and
incomprehensible systems at the large scale; your monadic fantasy is useless
when stuck inside the method of an 8-levels-inherited monster object...
_functional is important on a whole-system level; a 10-line method is easy to
understand even if it mutates local variables all over the place_
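Point 3 might look like this in Python: the mutation is real but fenced inside one function whose callers only see inputs and outputs (a sketch; the algorithm itself is an arbitrary invented example):

```python
def running_maxima(values):
    """Referentially transparent from the outside: same input, same output.

    Inside, it mutates local state freely - which is fine, because no caller
    can ever observe the mutation.
    """
    best = float("-inf")
    result = []
    for v in values:
        if v > best:
            best = v          # local mutation, invisible to the caller
        result.append(best)
    return result
```

The FP-pure version (a scan/fold) would work too, but the loop is easy for anyone to follow, and the purity that matters is at the function boundary.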

 _In theory_ FP would be great at all scales when coupled with a good type
system... but I've never got to work on projects in languages like Haskell or
Scala and I'm not sure _I could handle_ the cognitive overhead of it.

Oh, and _don't use Exceptions, ever!_

~~~
nnq
_On programming languages:_ It also doesn't matter whether _the language_ has
especially good support for functional programming when you don't care about
doing it _on the small, inner-inner-function level_, so `map`, `foldr`,
functors or whatever make not much difference. As long as you have _the
basics_, like first-class functions and lexical closures, you can do
"large-scale functional" programming in languages like Go just fine. It
actually feels more refreshing in a minimalist language like that, with a
minimalist and explicit type system :)
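For concreteness, here is the "just the basics" claim in Python: first-class functions plus a lexical closure are enough to compose behaviour at the large scale, with no `map`/`foldr` machinery in sight (`make_pipeline` is an invented helper for illustration):

```python
def make_pipeline(*steps):
    """Compose plain functions into one; relies only on first-class
    functions and a closure over `steps`."""
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

# Build larger behaviour out of small named pieces.
normalize = make_pipeline(str.strip, str.lower)
print(normalize("  HELLO "))   # hello
```

The same shape works in Go with `func(T) T` values; the language only needs to let functions be passed around and capture their environment.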

------
zaptheimpaler
I use Scala - it can be used or abused to do advanced OOP and advanced FP.

I tend to see programming as writing. The style of writing depends very much
on the context and the audience.

I lean towards "basic" FP (immutability/pure functions/composition) but none
of the heavy category-theory concepts (anything with types that are too
complex). It's mostly procedural with a hint of FP.

Most things should be functions, most higher level things (classes/packages)
should be primarily ways to bundle related functions.

Prefer to maintain a clear separation between data & operations on data or
structures imposed on top of it. I dislike the OOP approach of bundling data +
transformations together. Also dislike inheritance because that is baking one
particular structure into the definition of data.
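That separation, sketched in Python rather than Scala (all names invented for illustration): plain immutable data on one side, operations as free functions on the other, with no structure baked into the data's definition:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    """Plain immutable data; no behaviour bundled in."""
    item: str
    quantity: int
    unit_price: float

# Operations live apart from the data they act on.
def total(order):
    return order.quantity * order.unit_price

def apply_discount(order, fraction):
    # Returns a new value instead of mutating the original.
    return Order(order.item, order.quantity, order.unit_price * (1 - fraction))

order = Order("widget", 3, 10.0)
print(total(apply_discount(order, 0.1)))   # 27.0
```

New operations can be added without touching `Order`, and `Order` can be reshaped without hunting through methods - which is the point of keeping data and transformations apart.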

------
morbidhawk
My preference recently for side projects has been an imperative/procedural
style of programming. I prefer it to OO and FP, hands down. To me, code is so
much easier to read when there aren't clever abstractions everywhere; most of
the code reads line by line and isn't trying to hide anything. I think there
is a tradeoff between the concise, terse code you find in OO/FP, which reuses
other code, and long, step-by-step procedural code. Personally, I'd prefer the
longer code that doesn't reuse a bunch of other functions.

It's almost a weird paradox: OO/FP focuses on reusing code, yet
imperative/procedural code (i.e. C) does not reuse code but is itself the most
reusable code. You can't have your cake and eat it too, I guess.

~~~
jstimpfle
Reusable code may not be the best code. There is probably a lot of redundancy
(cross-cutting concerns), or alternatively increased interface complexity, to
the point where it's easier to just write a custom-tailored version of the
code.

I still like to program in C and make my own thin abstractions / "runtime",
tailored to the task at hand, and using only minimal dependencies. That way
it's much easier to have only _essential_ dependencies, not _accidental_ ones.
It's such a relief seeing a couple thousand lines of cleanly modularized C
code compile almost instantly.

For example, by putting a char * into a struct definition, a dependency on a
more clever string type can be avoided. All it takes is delegating
memory-management decisions to another place, which is actually very
beneficial, since most code shouldn't be concerned with mutation or,
especially, allocation.

~~~
platz
Redundancy/duplication is not a boon to maintainability in a project that
admits a revolving door of co-authors.

------
itwy
PHP. Good old procedural PHP. Cleaner than most OOP and FP projects in the
wild and I can code large things quickly and easily. The code is very
maintainable as well. I even avoid PHP's OOP solutions with the exception of
PDO which I wrap in procedural functions.

------
messe
Physics student here. I write a lot of python scripts/notebooks, which tend to
be very imperative in style. They're usually just one-off programs, and
there's little in them to be abstracted out. I rarely drop into C, as
numba+scipy is usually fast enough.

Other than that, I use Mathematica quite a bit for playing around with ideas.
I've tried sympy, and I hoped to switch to it but it's just not as fluid and
integrated.

If I'm coding for fun I tend to come back to FP and OO, I'll use either
Haskell, Python or CL. I've been meaning to make something using Rust for a
while. I'm planning on building a fermentation chamber (and perhaps a
kegerator depending on my budget) for homebrewing at some point over the next
year; I'm thinking that I might use Rust for the temperature controller.

~~~
glup
Similar case here as a cognitive science PhD student... analyses (Jupyter
notebooks) are basically imperative but if I need to write a library to
support the analysis (which I almost always need to do) then OO.

------
fpoling
When I have an option, I program in a procedural style. The code is easier to
maintain and follow this way.

For side projects and for small Python code at work recently I have been
applying data-oriented programming patterns. So far they worked surprisingly
well.

Perhaps this is because one has an overview of the whole program state, which
is not hidden behind multiple layers of OOP abstractions, and there is no
mixing of data and code, as happens with the functional style.
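A tiny Python sketch of that data-oriented shape (all names invented for illustration): the whole program state is one visible structure, and functions take it and return the next state, so nothing is hidden inside objects:

```python
# The entire program state in one plain structure, not scattered across objects.
state = {
    "inventory": {"widget": 5, "gadget": 2},
    "orders": [],
}

def place_order(state, item, quantity):
    """Takes the whole state, returns the next state; no hidden fields."""
    if state["inventory"].get(item, 0) < quantity:
        raise ValueError(f"not enough {item}")
    inventory = dict(state["inventory"])
    inventory[item] -= quantity
    return {
        "inventory": inventory,
        "orders": state["orders"] + [(item, quantity)],
    }

state = place_order(state, "widget", 3)
print(state["inventory"]["widget"])   # 2
```

Because every function's input is the whole state, you can inspect or snapshot the program at any point with a single `print(state)`.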

------
ajarmst
OOP wasn't really yet a thing and GUIs (which I think of as the killer app for
OO) were still experimental when I was first learning programming. I started
out with Fortran and Pascal on a timeshared mainframe. Probably because of
this background, I still find OO languages ungainly, especially the huge
libraries usually associated with them. I'm also a bit of a skeptic about the
purported value of the OO abstraction, which always seems to cherry-pick from
a few problem classes that lend themselves to representation as interacting
objects.

While I sometimes teach programming in C# and C++, my main responsibility is
our Algorithms and Data Structures courses, for which we've kept using plain
old C. From a teaching perspective, it's a great language for teaching
fundamental algorithms and data structures. The students are also able to
leverage that work into later courses on microcontrollers and embedded
systems.

For my own work, I tend to mostly be writing code for somewhat idiosyncratic
one-off data processing (I'm currently working on a genetic algorithm for
class, student and professor scheduling) or embedded systems. The former I
generally work in Common Lisp (SBCL) (although I do keep promising myself to
learn enough Clojure to see if I prefer it). This is probably because of the
way I think, as I tend to be comfortable building a system from bottom up,
moving the data representation from the general facts I start with toward a
representation that meets my needs. I actually came to CL pretty late, about a
dozen years ago, but it really seems to suit me, possibly because I'm still
primarily a command-line person. Being an emacs user certainly was a strong
influence, too, although I don't hack elisp very much. I should add that I do
often use CLOS, especially when working with more structured data, so I do use
_some_ OO, although it's mostly just to provide a more convenient interface to
some complicated data type.

For embedded development, I still work almost exclusively in assembler or C,
with less assembler every year. I tend to be working on pretty low-powered
special-purpose devices, so code reuse and robust interfaces don't add much
value, but close control of exactly what the hardware is doing does. This was
normal in that domain until comparatively recently, but the power of even
cheap devices and the availability of libraries mean that there are a lot more
options. I expect that I'll continue to work mostly in C out of familiarity
and inertia, but I will admit to having recently bought some Lua books with
the intent of trying it with ESP8266.

~~~
lj3
> which I think of as the killer app for OO

Was this written in a book somewhere? I keep hearing it from people, but
nobody's able to really explain why. OO is terrible for GUIs IMHO.

~~~
ajarmst
GUIs lend themselves very well to an object abstraction. The discrete elements
(dialogues, menus, controls, etc) are objects that often inherit behaviour
from classes of objects (i.e. A modal dialog is a generic dialog is a form is
a ...). Events are messages between objects. We even see things like
polymorphism eg. anything can get a "click on" message, but different objects
behave differently when clicked.
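As a toy Python illustration of that framing (class names invented): discrete elements inherit behaviour from more generic classes, and a "click" message dispatches polymorphically:

```python
class Widget:
    def on_click(self):
        # Default handling; subclasses inherit or override.
        return f"{type(self).__name__}: default click"

class Dialog(Widget):
    pass  # a dialog is-a widget; inherits click behaviour unchanged

class ModalDialog(Dialog):
    # A modal dialog is-a dialog, but overrides how clicks behave.
    def on_click(self):
        return "ModalDialog: swallow click until dismissed"

# Anything can receive the "click" message; behaviour differs per object.
for widget in (Widget(), Dialog(), ModalDialog()):
    print(widget.on_click())
```

The event loop only ever sends `on_click`; it never needs to know which concrete widget it is talking to, which is exactly the property GUI toolkits exploit.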

------
5ilv3r
I try to only use capabilities that are common to all languages: some basic
math, basic conditionals, and maybe poking a file or memory address somewhere.
I don't give enough of a crap to get excited about new but short-lived
features anymore.

~~~
mlok
Same for me. Lately I've been wishing for a translation matrix of all these
basics across every language.

~~~
frou_dh
A start: [http://hyperpolyglot.org/](http://hyperpolyglot.org/)

------
nnfy
Personally, I learned to code in an OOP style. I cut my teeth on C++ and then
a few years of C#, but these days I work in Python.

My problem with non-OOP in a language like Python is that I have trouble
scaling large apps and interacting with large libraries, because without type
information enforced by a compiler, too much is left to my own working memory
compared to a strongly typed language built around OOP.

What's more, without rigid encapsulation I feel I'm exposed to an excess of
internal code any time I need to look up a parameter or kwarg. Never mind the
trouble of navigating five files deep through aliased method names...

As examples, consider matplotlib or tensorflow. Extremely easy to use once you
know where everything is and what params to pass where, but I can't help but
feel that it would be easier with stronger OOP.

I feel like I have not yet learned to think in a Pythonic manner, but after a
few years I'm not sure I ever will. As an aside, I just don't understand how
Python doesn't cause problems for devs when it scales...

------
gt565k
I primarily use a declarative language called LogiQL, which is a superset of
Datalog. It's basically logic programming, using predicates and rules.

We have a proprietary database called LogicBlox, that allows us to push all
business logic down to the database layer, and almost entirely avoid having a
service layer, or at least a very lean one. This allows us to address
efficiency problems purely by optimizing database joins between predicates
(tables).

Our data is highly normalized (6NF).

[http://www.logicblox.com/learn/](http://www.logicblox.com/learn/)

------
lgas
FP. Because it makes most things easier while resulting in software that works
more consistently.

------
rasjani
The last project I did was mainly procedural, and that's my default go-to
structure, but OO has its place too. For example, in that last project I had a
few objects that implemented the same functionality over different data
structures, so supporting a new data structure later would only mean
implementing one new class with the predefined and partly dynamic features.

For data iteration, I prefer a functional style (even in Python; even after
years, list comprehensions just feel so wrong).
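For concreteness, the same transform both ways in Python (a trivial invented example):

```python
data = [1, 2, 3, 4, 5]

# Functional style: map/filter with lambdas.
evens_doubled = list(map(lambda n: n * 2, filter(lambda n: n % 2 == 0, data)))

# The comprehension most Python style guides would suggest instead.
evens_doubled_comp = [n * 2 for n in data if n % 2 == 0]

assert evens_doubled == evens_doubled_comp == [4, 8]
```

Both are pure and non-mutating; which one reads better is largely a matter of which tradition you come from.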

------
imh
SQL is neither FP nor OO! It's not a general purpose language though, so it
may not be a great answer. It's still amazingly useful and incredibly
prevalent.

------
cratermoon
My current gig has me writing Go, which doesn't do much more than wave a hand
at OO to start with, and the codebase I'm working with is particularly un-OO.

------
tscs37
I mainly write in Go. My adopted approach is to be pragmatic about how I
approach my problems.

I kinda tend to start with an OOP style for the data models and a small shake
of FP for the functions, but I change this according to my needs.

There is no one true style of programming; there are only tools for a task,
and using the best (or a good enough) tool is the only important thing.
(Though sometimes it's fun to use the wrong tool and see how it goes.)

------
jstewartmobile
I strive for FP, but since most external libraries are OOP, I've made my peace
with it.

That's probably not the right question though. If we're just focusing on
implementation, using the right data structures for the problem would have to
be job #1.

Screw that up, and everything else (APIs, UIs, enhancements, training, you
name it) is going to be an uphill climb.

------
seanmcdirmid
I primarily use nouns for the macro architecture but use lots of verbs in the
details.

Neither style especially dominates, and most languages have enough support for
both even if they tilt one way or the other. Most people pigeonhole other
styles into one or the other, sometimes ambiguously (e.g. actors get counted
as both OOP and FP; relational also has that problem).

------
k__
I mostly code in JavaScript.

I try to use FP most of the time, but sometimes a little mutation seems easier
to grasp.

------
throwaway7645
I primarily use Python scripts in my day job to parse data, automate job runs,
analyze data...etc.

Rarely is anything complicated enough to abstract without making the code
longer and more complex. Anybody can follow imperative code. OO is more
difficult.

------
leksak
C

~~~
frou_dh
Starting with C has made me consider it a travesty whenever I see a language
where "main" is forced into being a method on a class.

OOP as a technique is fine, but any language disallowing freestanding
functions is perverted, I tells ya.

~~~
leksak
I agree. I prefer pragmatic languages as opposed to dogmatic ones which is
probably why I'm friends with Kotlin

------
d1ffuz0r
Python, with tricks from Erlang, PHP frameworks, and Lisp. I do FP for most
things, using OOP to structure the code when reuse is necessary: e.g.
connectors to external APIs.

------
pacala
Scala[!] NLP/symbolic math.

------
KirinDave
Even in OO languages, I'm almost exclusively in an FP style, because that
style focuses on minimizing redundant code (e.g., don't write loops, write
loop transformer functions), minimizing redundant top-level functions
(segmentations, hand-tuned stateful getters/setters, etc.), and maximizing the
opportunity for correctness.

The last part is pretty important because properly testing software that talks
to an external service is essentially impossible. Either you test in an
integration setting where repeatable results are difficult to maintain, or you
mock out your services and then you're testing your opinions against other
opinions.

Here's an example of how I push this style in TypeScript, heavily redacted to
remove company-specific marks, names and structures. (This code is simple by
intention; we may have a jr dev on the team soon, and an excessive number of
generator abstractions could help with this process but would cause
readability issues.)

    
    
        function splitOrders(orders: e.OrderResult[]): {[key: string]: e.OrderResult[]} {
            return orders.reduce<{[key: string]: e.OrderResult[]}>(
                (accum, borders) => {
                    // I sure would love a better idiom for non-destructive map updates but
                    // this isn't my primary language. Suggestions welcome!
                    let next = {...accum};
                    next[borders.datatype] = (next[borders.datatype] || []).concat([borders]);
                    return next;
                }, {});
        }
    
        export async function undoifyUserProcess(env: any,
                                                 tolkien: client.tolkien,
                                                 token: client.APIToken,
                                                 spotData: IThingData): Promise<any> {
    
            const legendaryCustomer = await e.getCustomerById(env, spotData.customer_id);
            const orderResults = await e.findThingContractDetails(env, +spotData.spot_id);
            const workSchedule = splitOrders(orderResults);
    
            let btasks = scheduler.undoifyCustomerFromOrders(
                env,
                token,
                { key: legendaryCustomer.userKey },
                (workSchedule['newstyle'] || []).map(wi => wi.bigId));
    
            let ltasks = (workSchedule['legacy'] || []).map(async (workItem) => {
                // .. process elided but...
                const [nodeKey, groupKey] = ["things", "were", "removed"];
                const customerEmail = legendaryCustomer.some_field + "removed work";
                // Factoring this out would have introduced a function with a bunch of parameters
                // or state that is only synthesized here, so we left it embedded & closed over.
                try {
                    return await omg.undoifyUserWithTagGroup(env, tolkien, customerEmail, nodeKey, groupKey);
                } catch(e) {
                    console.log(`... malformed database call your mom for help ...`);
                    throw e;
                }
            });
    
            try {
                const work: Promise<any>[] = [btasks, ...ltasks];
                const result = await Promise.all(work);
                return await e.recordUndoifyingThing(env, +spotData.spot_id);
            } catch(e) {
                console.log(`...`);
            }
        }
    
    

If a language can't express a decently abstract functional style, I do my best
to avoid it. Still, I've had to write a bit of Golang at my current job
despite its utter lack of descriptive capabilities. It's poignantly
frustrating to deal with, but a good reminder of how bad life used to be for
programmers.

~~~
solipsism
_or you mock out your services and then you're testing your opinions against
other opinions._

I'm curious what you mean. Mocking out the dependencies of a piece of code
allows you to test that code exhaustively (for every range of possible inputs)
if you choose. Even when that's not practical (probably most of the time), it
allows you to do your best. It gives you the hooks to simulate any range of
behavior.

 _and maximizing the opportunity for correctness._

_The last part is pretty important because properly testing software that
talks to an external service is essentially impossible_

It sounds like you're just trying your best to write _correct_ code and then
hoping for the best? This absolutely will not scale, and it sounds downright
frivolous.

Still, I would love to know more about where you're coming from. Do you do
anything to ensure correctness aside from maximizing your hope of writing it
in the first place?

~~~
KirinDave
First of all, Tome4/DND reference?

> Mocking out the dependencies of a piece of code allows you to test that code
> exhaustively (for every range of possible inputs) if you choose. Even when
> that's not practical (probably most of the time), it allows you to do your
> best. It gives you the hooks to simulate any range of behavior.

It lets you mock anything at all, which is not what you want to mock. You want
to mock the service (and in a microservices architecture, its interplay with
dependencies).

If you can encode those into a series of constant replies? Good, but you'd
better be doing something like F#'s type providers or Facebook/Marlow's Haxl,
where you capture real traffic off the wire; otherwise all you've done is
inject more opinion and conjecture into your tests and confuse that with
correctness.

Outside of a very small number of specific methodologies, I think our industry
is at mostly a loss for how to test microservices.

> This absolutely will not scale, and it sounds downright frivolous.

It doesn't scale with junior engineers. But neither does unit testing, which
is more of the same "I think up these tests, we try them, and we pretend
that's test coverage because this line of code was touched", which is
demonstrably false.

In TypeScript, I try to do queue-based architectures with full Docker stack
simulation. People like to call these "integration" tests, but I consider them
unit tests for impure code.

In a language that lets me do pure/impure effect separation even in a communal
setting, I tend to use quickcheck for the pure parts and hand-test the impure
parts if an environment exists. When I get to use Haxl, I used their caching
features to preload responses and my code didn't even know it was being
tested, which was great.

Also, in Haskell, I'll sometimes use MonadMock but only if I can grab raw
source response from APIs (or of course I need to do something like fix time
or random numbers).

But I really try not to waste my time in the futile tarpit that is unit
testing anymore. It's sort of the pop quiz mentality applied to software
engineering and all it does is provide false confidence that code is "tested"
when the quality of the output still varies radically on the discipline of the
developer.

When I _do_ go back to Ruby or untyped javascript, I do end up writing unit
tests. But they're mostly to capture scope around components to ease
refactoring and make requirements explicit; something every language should
have.

Sorry for being so frank, but you seemed like you wanted a bigger discussion.
I think the industry has collectively decided to freeze all interest in any
technology conceived after 2007 and it bugs me.

