
Correctness – A paradigm for sustainable software development - chrisdaloisio
http://nonullpointers.com/posts/2019-03-27-correctness-the-paradigm-for-sustainable-software-development.html
======
gambler
_> The thing you need to look at if you’re using say, a dynamic language, or
object oriented design, is that in the long term, what is the language and
mindset of “objects” with dynamic dispatch providing you apart from an endless
stream of bugs that seem to keep reoccurring every time you try and evolve the
software to introduce a new requirement?_

I am sick and tired of the arrogant, willfully ignorant developers touting
their FP horn and bashing OOP without any qualifiers. This is getting as
annoying as Java people posting their design pattern nonsense as if it was
something good.

If you know what a monad is, you should also know the founding ideas behind
OOP.

[http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay_oop_en](http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay_oop_en)

The ideas make sense. Moreover, Kay has an amazing track record of saying
unpopular things and being proven right in the long run. If you spend some
time looking into it, you should see that his high-level intuitions about
software architecture are proving to be correct, despite many of his critics.
A lot of hyped-up modern technologies are implicit, ad-hoc, overly complex
implementations of OOP ideas done without realizing they are such. The whole
IT industry would be far better off if people took time to look into _and
understand_ the original ideas behind OOP.

1\. OOP is a higher-level paradigm than FP, so people comparing them directly
usually are missing the point to begin with. OOP systems can be implemented in
functional languages or use functional-flavored components.

2\. It is beneficial to model _algorithms_ without using state. FP is very
good at that. (Boring but practical example: LINQ vs loops and variables in
C#. Functional LINQ pretty much always "wins", just like Streams always win in
Java.) However, large-scale systems almost always have state. If you do not
have some wise approach for handling it, your designs will have issues at
system level. OOP is one such approach and _at that level_ its ideas have a
lot of value.
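The LINQ/Streams contrast carries over to any language with functional-style pipelines. A minimal sketch in Python (comprehensions standing in for LINQ/Streams; the data is made up):

```python
data = [3, 1, 4, 1, 5, 9, 2, 6]

# Imperative: mutable state (total) and control flow are threaded by hand,
# leaving room for state-update and off-by-one bugs.
total = 0
for x in data:
    if x % 2 == 0:
        total += x * x

# Declarative: the pipeline says *what* is computed; there is no mutable
# intermediate state to get wrong.
total_fp = sum(x * x for x in data if x % 2 == 0)

assert total == total_fp  # both compute 4*4 + 2*2 + 6*6 = 56
```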

I'll edit this post later to add a few more points.

~~~
thraway-burnout
This is a bit like the "Islam is a religion of peace" discourse. I'm sure it
is, but:

OOP as actually implemented and found in the wild isn't about Smalltalk or
message passing. It's about transforming functions into methods by
unnecessarily wrapping behaviour in classes, as an example of cargo-cult
programming.

OOP the idea is great. How do you actually find it in the world?

~~~
depressed
If functional programming were implemented as widely and abused as much as OOP
is, would it really fare any better?

~~~
ummonk
You can already see examples of this.

Take callback hell. Opaque callbacks are the functional equivalent of goto
statements.
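The "callback hell" shape can be sketched in a few lines of Python (the fetch_* functions here are hypothetical async-style APIs that deliver results via callbacks):

```python
# Hypothetical callback-style APIs: each delivers its result to a callback
# instead of returning it.
def fetch_user(user_id, callback):
    callback({"id": user_id, "name": "alice"})

def fetch_orders(user, callback):
    callback([{"total": 10}, {"total": 32}])

def fetch_total(orders, callback):
    callback(sum(o["total"] for o in orders))

results = []

# Each step hides the next inside an opaque closure; like a goto, the jump
# target is invisible at the call site, and nesting deepens with every step.
fetch_user(1, lambda user:
    fetch_orders(user, lambda orders:
        fetch_total(orders, lambda total:
            results.append(total))))

assert results == [42]
```

Control flow can no longer be read top to bottom, which is the sense in which opaque callbacks resemble gotos.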

Also, many functional programmers, particularly Haskell programmers, seem not
to care about readability and maintainability. They're focused on maximizing
their own productivity as an individual coder, and writing cool concise and
abstracted code that might not actually be easy for another engineer to
understand and modify.

The difference though is that good functional programming is still quite
common whereas good object oriented programming seems to be rare (the one
exception being microservices).

~~~
marmaduke
> Opaque callbacks are the functional equivalent of goto statements

In some sense they are worse, because you can't just grep for the call site
of an anonymous closure like you can for a goto; it could be almost anywhere
in the code base.

------
agentultra
I had a thread on Twitter about this problem of "musicians nerding out over
gear." In our metaphor it was about mountain climbers: the programming
language and the paradigms you choose may have some effect on how you scale
the mountain, but the real problem is the mountain. Whether you use functional
programming or dynamic typing, it only affects small, local problems in the
practice of climbing mountains.

The problem of efficiently scaling mountains requires a broader scope. This is
why I think formal methods are starting to break into the mainstream: they
give us tools to use to simulate and verify designs (model checking, system
design validation), validate implementations (verification, proof), and
generate provably correct code from high level specifications (synthesis).

Grumpy programmers who think maths is for academics are going to miss out, I
think. It's an exciting field, and the cost of something that used to be
reserved for extreme circumstances has been coming down so rapidly that it can
be used effectively in small businesses as well.

It all comes down to clarity: in communication, in thought, and in practice.

~~~
l_t
Interesting point. I think I've been feeling this lately as a desire to have,
basically, smarter compilers. It's actually incredible to me when I think
about how many minutes of human thought are wasted among myself and my peers
to informally verify facts about our programs.

Obviously, types and tests help. But it's astonishing how much incidental
complexity creeps into one's everyday work with our current tools. Under most
circumstances telling a computer to do something shouldn't be that much harder
than telling a human to do it (assuming a concrete grammar and shared
vocabulary).

~~~
mlthoughts2018
I’ve programmed professionally in Haskell, Scala and Python in large projects
across several jobs.

In my experience, the compiler is rarely helpful at catching bugs. Most bugs,
whether in a dynamic typing language or otherwise, are behavioral bugs that
occur at runtime without generating explicit runtime errors, just incorrect
but uninterrupted behavior.

I always heard people make grandiose claims about Haskell, like if you get
your program to compile, it’s very likely to be correct and run without error.
I’ve emphatically never found that to be true.

The types of bugs that compilers can help with are very simplistic most of the
time. Requesting a method or attribute that doesn’t exist, typos, confusion
about arguments passed to a function.

In dynamic typing, you also solve these same things in a super cheap and low
effort way with unit tests.

Additional behavioral unit tests are where real effort becomes required, and
these are needed in any paradigm.

Compilers are also not free. My experience with the Scala compiler for example
was awful. It is so incredibly slow to compile that I absolutely would rather
give up type checking and just use some cheap unit tests to have a much faster
development cycle. When you make 1 or 2 small changes and even a smart
incremental compiler setup like zinc needs to recompile 30 code units and it
takes ~ 3 minutes before you can run tests, this is maddening, and the cycle
repeats the whole time you’re working, so it’s this constant extremely
disruptive expense.

I’ve had similar things happen in legacy code bases with Haskell too.

~~~
int_19h
> In dynamic typing, you also solve these same things in a super cheap and low
> effort way with unit tests.

I wouldn't call it "super cheap and low effort", given that tests for stuff
like "what if it's not an integer?" tend to be a large part of the overall
unit test suite in dynamic languages. Just on the amount of code alone, I
would say that it's far less effort to express that kind of stuff in types
than in tests.

~~~
mlthoughts2018
If you’re writing tests like “what if the input is not an integer” in Python,
you’re probably doing a lot more stuff wrong. Dynamic typing is useful
typically because you rarely need to check these types of issues. Instead
you’d have some sort of input sanitizer that is responsible for all such
inputs into a large system, and would include what type of failover logic
should happen upon receiving a wrong input.

If you’re testing all kinds of different functions for argument checks or
defensive programming conditions, it suggests a refactor to a central
sanitizer pattern, and just one small set of sanitization tests. Other
functions can take it for granted that the sanitizer works correctly.
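A minimal sketch of that central-sanitizer pattern in Python (parse_order and its field names are hypothetical; the point is that validation happens once, at the boundary):

```python
def parse_order(raw):
    """Validate untrusted input once, at the system boundary."""
    if not isinstance(raw, dict):
        raise ValueError("order must be a mapping")
    try:
        quantity = int(raw["quantity"])
        price = float(raw["price"])
    except (KeyError, TypeError, ValueError) as e:
        raise ValueError(f"bad order: {e}")
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    return {"quantity": quantity, "price": price}

def order_total(order):
    # No defensive checks here: the sanitizer guarantees the shape.
    return order["quantity"] * order["price"]

order = parse_order({"quantity": "3", "price": "2.5"})
assert order_total(order) == 7.5
```

Only parse_order needs type-shaped tests; order_total and its siblings can take valid input for granted.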

You often need the same thing in statically typed systems anyway for cases
when calculating whether an input is valid involves conditions not encoded by
the type system (possibly even runtime configurable by the user). So except
for the most trivial kinds of type validation, this does not actually
constitute a difference in effort between the two paradigms anyway.

------
pron
As someone who practices and encourages others to employ formal methods in
software development, I'm disappointed to read a post that claims FP has some
significant effect on correctness. This has not been established, and does not
at all appear to be the case. There are many aspects, including techniques and
tools, that can positively affect program correctness. The choice of a
programming language or even a paradigm is not high on the list if it is on it
at all. Not only does the little data we have not substantiate the hypothesis
that FP has a significant positive effect on correctness, there are even
significantly fewer anecdotal reports supporting that hypothesis than other,
apparently more effective techniques. In short, any connection between FP and
correctness is, at this point in time, nothing more than wishful thinking, and
the efforts of those who really care about correctness are better spent
elsewhere (i.e. _not_ on picking a new language).

~~~
sevensor
There are certainly a lot of people who _feel_ that way, though. Especially
when the language has an expressive type system. Would you say these languages
merely swap one kind of error for another?

~~~
pron
There are certainly quite a lot of people who feel homeopathy is effective. In
both cases, that feeling is a result of a complex process [1]. Explaining why
it is _unlikely_ (from a theory perspective) that languages like Haskell have
a significant effect on correctness is easier [2], but it, too, requires much
more space. What _is_ simple to show is that organizations that develop in,
say, Haskell do not produce more correct software more cheaply (or some
combination of the two) than those that use other languages.

[1]: For example, in this case, what you call "a lot" may just be the effect
of some sort of enthusiasm that results in publications rather than an actual
great number of people. Also, the feeling can be the result of programmers
facing an unfamiliar language that is challenging them, hence they are
required to focus and think more, and that gives them a feeling that they're
writing something that's more correct. That the language catches some errors
early may also strengthen this feeling. They're also likely writing smaller
programs. I've programmed in many languages, and one of the languages where I
get it right most quickly is assembly, probably because I write smaller
things, and because I need to focus much more.

[2]: It's relatively easy to classify which program properties can be assisted
by the language and which cannot. Those that can (e.g. memory safety and type
safety) are called _inductive_ (or compositional). They can be helpful, but
the vast majority of correctness properties aren't inductive.
Inductive/compositional means that the property is preserved by all primitive
operations in the language.

~~~
sevensor
Interesting! Thanks for a detailed reply. Since you bring up the cost of
software, do you believe that using formal methods helps reduce costs?

~~~
pron
Formal methods encompass a very wide range of techniques that vary wildly in
cost. When the appropriate technique is applied to the appropriate problem,
then it can certainly reduce costs, even significantly. But like I wrote in
another comment, most of the cost of building software is determined by _what_
we build, not _how_. Tools and techniques, even those that are effective, can
only help so much.

------
perlgeek
This is a very backend/algorithm-centric view of correctness.

What about user interfaces? What is a "correct" user interface? If a bug is
"unexpected" behavior, and my user interface has several thousand users, how
do I manage to not produce unexpected behavior to anybody?

Much software supports workflows and processes in some enterprise, and those
workflows shift subtly over time. How does Haskell help me write "correct"
software in such a context, whatever that means?

How do I deal with low quality of input data, when I can't ask my product
owner to create a matrix of each case of missing data and behavior, because
that would be exponential in size?

Phrased another way, the problems with correctness that seem to trouble the
author are not the problems with correctness that I tend to encounter often in
my work.

Cases where there is a clear-cut bug in an algorithm tend to be easier to
debug and fix than bugs that stem from several departments having subtly
different ideas of what certain pieces of data and actions mean.

~~~
Jtsummers
A workflow can be modeled using something like a state machine or statecharts.
That can be encoded into the type system in a language like Haskell. For front
end stuff, you could model your system using something like [0] which uses
statecharts. An example of unexpected behavior: Is it possible for a user of
your application or site to end up in a state where the only next step is to
close it and restart? Or can they properly navigate from nearly any display to
any other? While the linked tool doesn't directly answer those questions, it
will help you construct your models in a way that you can manually evaluate
them more easily or convert to a TLA+ or similar specification and have the
computer check that.
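As a rough sketch of the idea (in plain Python rather than Haskell's type system; the states and events are invented), a workflow modeled as an explicit transition table can be checked mechanically for dead-end states:

```python
# A UI workflow as an explicit state machine: (state, event) -> next state.
TRANSITIONS = {
    ("login", "submit"): "dashboard",
    ("dashboard", "open_settings"): "settings",
    ("settings", "back"): "dashboard",
    ("dashboard", "logout"): "login",
}

def dead_ends(transitions):
    """States that can be reached but have no outgoing transitions,
    i.e. places where the user's only option is to close and restart."""
    sources = {state for (state, _event) in transitions}
    targets = set(transitions.values())
    return targets - sources

assert dead_ends(TRANSITIONS) == set()  # every reachable state has an exit
```

A language like Haskell can push the same table into the type system so invalid transitions fail to compile, and a tool like TLA+ can check richer properties over the same model.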

Could you, or should you, use formal methods for _everything_? No. But it can
be applied pretty broadly if you're open to it, and in fairly lightweight ways
(especially these days since there's been a ton of interest over the last few
years).

Once you start recording your formal (or more formal) specs, you can also
share them more easily. A major problem in coordinating with so many different
groups is that they all have their own model for what's going on based on
their interaction with the systems. By creating more formalized models that
allow for clearer communication (a few statecharts versus pages and pages of
prose; flow charts versus pseudocode) you can reduce the communication error
between teams/departments. And with all the recent work going into it, and
using more capable languages (or tools with less capable languages) you can
encode much of this into your program and make correct system construction
more feasible.

[0] [https://sketch.systems/](https://sketch.systems/)

------
pierrebai
I've been thinking about this subject for a while and been on the verge of
writing about it.

The thing is, I've come to the opposite conclusion.

My conclusion is that programmers have been trained to think about
correctness. What they should be trained to do instead is design to account
for inevitable failures. My main paradigm is the floating point design: it
incorporates a built-in invalid value, which neatly propagates itself.
Objective-C also has this concept of propagating a chain of failures across
function calls.

Your programs will be buggy, your data will be wrong. Design to support
propagating it with null actions and detect it at the top of your action loop.
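Floating point's NaN illustrates the idea concretely: arithmetic on the invalid value stays invalid, and one check at the top of the loop suffices. A small Python sketch (the readings data is made up):

```python
import math

def mean(xs):
    # Propagate "no data" as NaN rather than raising.
    return sum(xs) / len(xs) if xs else math.nan

readings = [1.0, 2.0, float("nan"), 4.0]

result = mean(readings)        # NaN flows through sum() untouched
adjusted = result * 2.0 + 1.0  # ...and through later arithmetic

# Detect the failure once, at the top of the action loop:
if math.isnan(adjusted):
    adjusted = 0.0  # null action / fallback

assert adjusted == 0.0
```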

(One of the unfortunate heritages of computing history is the missed
opportunity of the binary system's imbalance around zero. Both one's
complement and two's complement have abnormal values. In one's complement,
it's negative zero. In two's complement, it's the fact that the most negative
number has no positive counterpart and is its own negation
(-INT_MIN == INT_MIN). It was an opportunity to reserve that value for invalid
integers. Then you design your null pointer to be that value too, and NaN to
be that bit pattern. Then all your fundamental invalid values map to the same
bit pattern, which can easily be propagated. Oh well.)

~~~
nickpsecurity
"What they should be trained on and design is to account for inevitable
failures."

That is pretty much the design philosophy of Erlang. You should check it out
if you haven't.

------
perfunctory
Correctness is not an engineering problem. It's an economics problem. As long
as the IT industry is able to extract money from its clients while delivering
crappy software, it will keep delivering crappy software.

~~~
adrianmonk
If you think of it as an economics problem, it raises a question, one which
I'm not completely convinced I know the answer to.

Namely, is crappy software actually more efficient? Does it deliver more value
for less cost? Or is it, instead, that crappy software is a bad value, but
markets are not transparent enough, and it is too hard for clients to assess
whether the software they have received is crappy and too hard for businesses
to convince people to pay them for quality?

If you hire a maid who always manages to miss a spot or two cleaning your
bathroom, does it really matter? Your bathroom is a lot cleaner than it
would've been otherwise, so mission accomplished in the grand scheme of
things.

On the other hand, if you hire a mechanic to overhaul your car engine, you
might not be able to tell the difference when you drive it home from the
repair shop, but if they did a poor quality job, it is going to come back to
bite you. Quality is important even if you need to pay more to get it.

Which one of these two things is software more like? I could see it going
either way. Maybe it depends. Maybe there are ways you can cut corners that
are a win (more value / less cost) and other ways that are a loss.

~~~
OneWordSoln
>> If you hire a maid who always manages to miss a spot or two cleaning your
bathroom, does it really matter?

It depends on the spot missed: is it contaminated with fecal matter? There you
get into the realm of risk management. The problem is that, with software, one
bad mistake can leave the user with complete loss of data. Look at Microsoft's
recent update debacle; some people lost a great deal of data.

>> Namely, is crappy software actually more efficient?

Never. As the Chinese expression goes: Pay a lot, cry once.

The problem is that the people who pay for the creation of software (i.e.
corporate directors) are usually ignorant about all things except for money
and the vague desires of their customers. The folks that can specify and
implement the systems rarely have the clout to allocate the resources
necessary to get the job done well.

>> Which one of these two things is software more like?

Information systems (software that must run over and over again against the
same data, often concurrently) are actually engines in that they must
withstand stopping and starting; of course, the information engines that run
continuously are even more difficult to keep functioning as their uptime
stretches on. Therefore, the answer to your question is that software is most
certainly more like an engine, because information systems _are_ engines; it's
just that their fuel is data (including user input) and their output is
information (data made meaningful to human beings) and changes to their
information base.

We can all see how the poorly designed information engines of the world accrue
ugly cruft that eventually leads to their needing to be "re-built", which, for
example for Windows, means re-installing from scratch.

~~~
adrianmonk
Thanks for the thoughtful reply. I feel like our industry is in two camps
right now, one that firmly believes quality always pays off and one which
believes it is more of a luxury.

Unfortunately, I don't see either side backing it up with anything that looks
like real evidence. It's so complicated to evaluate that I don't think anyone
can really prove it either way. But you can't not have an opinion, since it's
an important question, so it seems like people join one camp or the other
mostly because it suits their personality or working style.

If they take pride in their work and feel disappointment at the prospect of
producing code they can't be proud of, they tend to believe quality is worth
it in the end. If they are impatient and like to wrap up one thing quickly so
they can move on to the next thing, they tend to be in the other camp.

I myself am in the quality and correctness camp, and I really would love to
know that it always pays off because that would mean the way I prefer to
develop software is also the most practical and reasonable way. But I can't
really prove that it's the right view when you consider the economics.

------
mrkeen
> Developers — focusing on correctness. This is the paradigm shift that we are
> starting to see.

I haven't seen this yet, but I sure hope to. I'm stuck in the "everyone does
devops now" paradigm :(

------
skybrian
Unfortunately, there are increasingly important areas where correctness isn't
even well-defined, including social networks, machine learning, performance,
and UI design. Things can go very wrong without violating a traditional
algorithmic correctness constraint.

But it helps to eliminate large classes of bugs where you can, so you can
concentrate on other things.

~~~
microtherion
I agree. Reasoning about correctness can be a very powerful tool for some of
the algorithmic parts of a system. But large parts of a system are not really
amenable to formal reasoning about correctness.

As an example, if you try to write your own web search engine, the linear
algebra or neural network operations you may be using are certainly amenable
to mathematical reasoning, but the relevancy of the returned results is not
something you can prove mathematically.

And in my opinion, this fact often leads "correctness" fanatics to retreat to
safe areas where their methods work, and snipe at all the rest of the world.
Characteristically, the article quotes E. W. Dijkstra, whose much-beloved
notes are full of explorations of mathematical toy problems and catty
dismissals of anybody trying to make real-world use of computers.

When Knuth was dissatisfied with the results of early computer typesetting, he
spent years of his life developing TeX. Meanwhile, Dijkstra kept hand writing
his papers and having staff turn them into printed form. That, to me, is the
epitome of the "correctness über alles" mindset.

~~~
Jtsummers
That seems to be an unfair assessment of Dijkstra's interests and activities.
He made major contributions in algorithms, language design, concurrent
computing, and distributed computing, among other fields. He is basically
the face of the Structured Programming movement (and coined the term) which
led to the common design elements of many languages used today (both the
presence of certain elements, and the absence of others).

I think it'd do you some good to read a bit more of his work and the history
of computing.

~~~
microtherion
There's a good chance I have read more of his work than you have.

Yes, Dijkstra made major contributions to algorithms—exactly the "safe corner"
where correctness thinking applies. Ironically, one of his contributions was
to the concept of semaphores. I sure don't hear any of Dijkstra's fans claim
today that if distributed programs just used semaphores, all our concurrency
problems would be solved.

Another important contribution: The Banker's algorithm, of which Andrew
Tanenbaum observed (Modern Operating Systems, 2nd ed):

> The banker’s algorithm was first published by Dijkstra in 1965. Since that
> time, nearly every book on operating systems has described it in detail.
> Innumerable papers have been written about various aspects of it.
> Unfortunately, few authors have had the audacity to point out that although
> in theory the algorithm is wonderful, in practice it is essentially useless
> because processes rarely know in advance what their maximum resource needs
> will be. In addition, the number of processes is not fixed, but dynamically
> varying as new users log in and out. Furthermore, resources that were
> thought to be available can suddenly vanish (tape drives can break). Thus in
> practice, few, if any, existing systems use the banker’s algorithm for
> avoiding deadlocks.

... which is pretty much my argument: Dijkstra picked himself the neat little
corner that WAS amenable to mathematical reasoning, at the expense of real
world applicability.

One of Dijkstra's contributions to language design was resigning from the
Algol 68 committee. It might do proponents of formal methods good to reflect
on why that language was less than a stellar success, given how strongly it
relied on advanced formal descriptions.

But the question is not whether Dijkstra contributed to algorithms, language
design, or concurrent+distributed computing (he certainly did). The thesis
that the article presents (and supports by quoting Dijkstra, who did agree
with that thesis) is that formal methods apply universally in system design,
and that today's software woes are largely the result of insufficient
application of formal methods.

It is there that I disagree, and it is there that Dijkstra kept pontificating
despite having abandoned programming and the use of computers around the time
he started focusing on formal methods. And that detachment from, and disdain
for, the actual practice of computer use is what renders his later opinions
suspect to me.

------
rglover
The biggest problem is communication and setting expectations. I constantly
hear/see people saying "oh that will be quick!" without any real evaluation of
the work. Simply saying "this is hard, it will take some time" makes focusing
on correctness a hell of a lot easier. In no uncertain terms: don't be a
pushover. If it can't be done well in the amount of time you suggest, don't
say that it can be. You're just digging a hole for yourself (both present and
future) and anyone you work with.

~~~
ff_
This is easy to say and not unheard of, but in organizations it is common (in
my experience at least) that if you say too often "this is hard and might take
some time", people will just start to bypass you (and/or your input/judgement)
in order to get stuff done quickly but less correctly.

~~~
rglover
If you explain why they're less likely to. And if they push back, just say
"okay, then I need your okay to rush on this so we can get it done—keep in
mind that it may have flaws and I'd need you to take responsibility for
those."

------
chalst
I hope this article strikes a chord.

Two quibbles: Haskell is prone to something that might be considered a class
of bug that Rust avoids: the space and time performance of idiomatic Haskell
code can be very surprising.

I also wonder at the possible answers you give to “When would you say that
the software had a bug?” \- the most obvious answer to me is when the commit
is made to the codebase that introduces possibly observable incorrect
behaviour.

~~~
tombert
I think it's experimental right now, but Idris (which I consider the "Child of
Haskell"), has basic support for linear types [1], which could at least help
the space-prediction problems. Doesn't hurt that Idris isn't lazy-by-default
either.

My point is that I actually think that Idris could end up being successful in
an industrial sense, due to the fact that it gives you all the correctness
guarantees of Haskell (and more), but has managed to (thus far) avoid a lot of
the cruft that has built up in the ~30 years Haskell has been around. Fingers
crossed.

[1] [http://docs.idris-lang.org/en/latest/reference/uniqueness-types.html](http://docs.idris-lang.org/en/latest/reference/uniqueness-types.html)

~~~
newacctjhro
This stuff feels like a proof of concept that isn't really integrated into
the rest of the language (kinda like OCaml's OO). For example, it's
unfortunate that Type, UniqueType and BorrowedType are different kinds (in
Rust, they are the same kind; stuff that isn't "unique" just implements Copy).

Rust's advantage is that its borrow system, with reborrowing rules and such,
feels much more ergonomic.

~~~
tombert
You're not wrong, but the fact that some work is being done on this is at
least a good sign, I think. A proof-of-concept is necessary before you can
integrate these things organically into the language, and I think that as a
result it could be pretty interesting.

If I knew anything about elaborate type systems, I would try and address your
complaints. Sadly, my knowledge of linear logic and whatnot is very ad hoc.

------
melling
There are several strongly typed (and functional) languages being built for
the web that compile to JavaScript:

    ReasonML - OCaml - https://reasonml.github.io
    PureScript - Haskell - http://www.purescript.org
    TypeScript - https://www.typescriptlang.org
    Scala.js - http://scala-js.org
    Elm - https://elm-lang.org
    ghcjs - Haskell (https://github.com/ghcjs/ghcjs)

This podcast covers many of them:

[http://bikeshed.fm/192](http://bikeshed.fm/192)

~~~
chusk3
Don't forget Fable - F# - [https://fable.io](https://fable.io)

------
iamleppert
I just got off a job where my boss was under the impression that bugs are
because of incompetence or laziness. He literally expected the code I produced
to be of perfect quality, or else, using his words “you don’t know what you’re
doing”. I told him about how most major companies have QA teams and engineers
spend a lot of time fixing bugs and reviewing others’ work.

My (now former) boss was non-technical. Wondering if anyone else has
encountered such a person? Coming from other industries, is it common that
people view software engineers as somehow sloppy or lazy in their work? Is it
really true? Is it unreasonable to expect a competent engineer produce
working, correct & bugfree performant code on the first try? I’ve definitely
had my moments of brilliance where it all “just worked” but that usually isn’t
the case and I’m wondering if it could be something wrong with me?

~~~
OneWordSoln
Software is still very much a craft, as opposed to an engineering discipline.
As such, we each have to learn, over time, how to produce defect-free code.
For the capable, concerned and intelligent developer, this means that each new
bug leads to new ways to ensure that such bugs will not happen again in the
future.

So, no, like life itself, we all make mistakes in our systems. The important
thing is that we don't make the same ones over and over again. In software
development, that means adjusting our development methodology to make such bugs
less likely in the future. That is why software development is both the most
challenging and most rewarding of careers. Of course, I _may_ be a bit biased
;-)

As far as encountering crappy managers, I'll just say that I can count the
good managers I've had on one hand and still have a couple of fingers left
over. As to your former manager's particular flavor of belligerence, I haven't
had that one specifically, but there are very much uglier variants, my friend.
My understanding is that one cannot be made a manager unless the person is
willing to prioritize money over human beings and it doesn't take a biblical
scholar to see how that is causing so many problems in the small and large
across the world. In the small, that attitude manifests itself in many ugly
personality traits, while being the foundation of the entire structure and
intention of the for-profit corporation where the vast majority of us are
forced to find our work.

------
jugg1es
I'm honestly having a hard time figuring out what the take-away from this
article is. It claims to lay out a paradigm but it doesn't really seem to lay
it out all that clearly.

------
fjfaase
Software engineering is still mostly a craft, not a true engineering
discipline. We should not start another FP-versus-OOP war; I think that misses
the point. We should start thinking about languages in which it is possible to
precisely define implementation, as in: X implements Y using representation R
under conditions C. For example, a CPU instruction for adding two numbers
performs modular arithmetic on a particular representation of numbers,
nowadays usually two's complement modulo some power of 2. It represents the
addition of two integers only if their sum does not cause an 'overflow'. There
are algorithms that implement addition of much larger numbers using other
representations and combinations of machine instructions. Writing an
implementation for a certain problem, like summing a sequence of numbers,
should begin with specifying the characteristics of the kind of sums one wants
to calculate. With a library of such implementations, one could engineer a
'correct' solution without having to write an implementation oneself.
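The "X implements Y under conditions C" idea above can be sketched in Haskell
(names here are illustrative, not from the comment): fixed-width addition on
`Int8` is modular, and a guarded variant only claims to implement true integer
addition when no overflow occurs.

```haskell
import Data.Int (Int8)

-- Int8 is an 8-bit two's-complement integer, so addition is modular:
-- 100 + 50 = 150 exceeds Int8's maximum of 127 and wraps to 150 - 256 = -106.
wrapped :: Int8
wrapped = 100 + 50

-- An add that implements *integer* addition only under the condition
-- that the true sum fits in the representation; otherwise it reports failure.
safeAdd :: Int8 -> Int8 -> Maybe Int8
safeAdd a b
  | wide == fromIntegral result = Just result
  | otherwise                   = Nothing
  where
    wide   = fromIntegral a + fromIntegral b :: Integer  -- exact sum
    result = a + b                                       -- modular sum
```

Here `safeAdd 1 2` yields `Just 3`, while `safeAdd 100 50` yields `Nothing`
because the modular result no longer represents the mathematical sum.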

------
kaveh_h
I started learning to write Haskell a while ago. My current reflection is that
while it's easy to start writing simple, correct code, it is a much larger
step to be able to write performant code that is also generic. Performant,
idiomatic Haskell requires a good enough understanding of the functional
paradigm.

~~~
BWStearns
I experienced the same thing. Once you get over the practical application of
monads hump it becomes a lot more like the progress experience in other
languages. You don’t need to achieve your “monads are a burrito” moment to
successfully write complex Haskell, you just need to be able to cargo cult
while recognizing that you are doing so and the understanding will slowly
accrete. That said, Haskell is the first language I’ve worked in where there
are periodic learning cliffs (lenses stand out here) where when you scoot into
a slightly foreign domain you need a few days of study. I still think the
juice is worth the squeeze though since once you learn how to do the thing
then doing the thing correctly tends to be easier in more forgiving languages.

------
EdSharkey
"Make it work, make it right, make it fast." - Kent Beck

My take on correctness is this: for ethical reasons, a developer should
_never_ be the final say on whether their solution is correct.

The users, whose voices are concentrated in the one and only Product Owner,
should always have the final say on the correctness of our production code.

The tricky bit is proving to the Product Owner and ourselves that we have met
the users' expectations.

I believe this is where testing and refactoring come in. Language and
programming-paradigm choices are window dressing if the actual code is a
slapdash mess with no formal proofs of correctness.

Until some genius comes up with something better, those proofs must take the
form of fully passing, comprehensive test suites.

------
wellpast
> what is the language and mindset of “objects” with dynamic dispatch
> providing you apart from an endless stream of bugs that seem to keep
> reoccurring everytime you try an evolve the software to introduce a new
> requirement?

The only mindset required to avoid long-term bugs and maintenance problems is
a _personal_ conscientiousness, a _personal_ approach to defensive programming
based on (ideally, extensive) _personal_ experience, which will inevitably
include a very-hard-earned skillset of _decoupling_.

Everything else is trivial (borderline bike-shedding) in comparison.

Static types, OO, functional programming, agile, TDD, etc, etc -- these will
not save you from your own _personal_ deficiencies. Only long, hard _practice_
will.

A skilled practitioner can build a large, complex system in Visual Basic or
PHP using entirely OO idioms. Now he or she may do a somewhat quicker (and
happier) job if he or she is able to employ some convenient tools of the trade
(eg syntactic niceties, functional-orientation, immutable libs, testing libs,
etc etc etc). But the quality and manageability of the system is entirely a
function of experience-based skillsets that far transcend the particulars of
the programming paradigms that we currently have available to us.

I've seen many a strong static typer, who thinks they have found God, make an
absolute mess of everything _including_ its leverage of the type system over
time.

I've seen many a developer write OO code within a functional language and vice
versa, and the success or failure in these cases is unrelated to even the
slight misapplication of the tool in hand.

Can you articulate a real world problem into a sound domain model? Can you
decouple the behaviors of your system into composable parts? Are you
conscientious enough to informally "prove" the transformations (ie git
commits) you make to your software system?

If you can do these things, you avoid a mess of a system _regardless_ of your
PL and programming paradigms.

If you cannot do these things then no matter what kind of process you bring to
the table, you will not be saved.

~~~
demilicious
This makes a whole lot of sense to me. However, one of the elements of truly
productive learning is _focused_ practice; that is, practice with mindfulness
and intentionality directed at a specific element. One won't necessarily get
any better at decoupling unless it is specifically practiced. To that end,
_how_ would one truly practice the skill of decoupling? Or maybe, what
resources or heuristics or strategies would you suggest to help draw the lines
between behaviours or modules?

The first thing that comes to mind is Gary Bernhardt's talk on Boundaries, or
"functional core, imperative shell", but I'm not sure that this is entirely
what you're discussing here.
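For reference, the "functional core, imperative shell" pattern from
Bernhardt's talk can be sketched roughly like this (the function names below
are illustrative, not from the talk):

```haskell
-- Functional core: pure decision logic, testable without any IO.
nextGreeting :: String -> String
nextGreeting name = "Hello, " ++ name ++ "!"

-- Imperative shell: gathers input, delegates to the core, performs output.
-- All the effects live here, at the edge of the program.
runShell :: IO ()
runShell = do
  name <- getLine
  putStrLn (nextGreeting name)
```

The decoupling payoff is that the core can be exercised exhaustively as plain
functions, while the thin shell needs only a handful of integration checks.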

~~~
wellpast
I think you're hitting the nail on the head. It is specific and _focused_
practice. I've been thinking about working on a book or set of blog posts or
something to try to flesh out what a regimen would look like.

Most writing on software of course is technical (on PLs, algorithms,
processes) but there seems to be very little to guide the software
practitioner who really wants to seek mastery.

> what resources or heuristics or strategies would you suggest to help draw
> the lines between behaviours or modules?

The best I can come up with for teaching this would be to create some concrete
"problems"/scenarios that would exercise the skill set, and then reveal
various ways the problem could be decomposed and quantify the degree of
decoupling and even reveal the practical consequence of the coupling by
introducing new "requirements"/dimensions to the scenario.

------
goto11
I fix software all day, but very few of the defects are really about
"correctness" in the strict sense of code implementing the specification
incorrectly. Most often it is simply that the "specification" was
bad/wrong/incomplete in the first place.

I appreciate that in some cases the specification is straightforward and the
challenge is to implement it correctly. But I think in most software
development it is really the other way around: Designing/specifying the
expected behavior is the hard part. It is pretty straightforward to implement
this behavior correctly.

------
User23
Without a specification, the correctness of a program is necessarily
undefined. This means that most buggy software isn't incorrect. In fact, most
buggy software isn't even unpleasant, at least insofar as users prefer using
it to not using it and continue to pay.

It follows that, as commonly used, the term "bug" refers to a pleasantness
defect rather than a correctness defect. And this stands up to our day-to-day
experience: in the overwhelming majority of cases, users report bugs because
they are displeased by a behavior, not because they studied the specification
and discovered a correctness error.

------
bsmith
I know this is a nit, but the incorrect use of commas is making this a really
tough read for me...pet peeve of mine, I suppose. Anyway, salient points: use
the right tools for the right jobs and all that jazz. Depends on the required
level of correctness for the application. Are we coding for an airplane
control system, or a slack bot that lets us know when the build biffs?

------
keepmesmall
If you notice bugs, you should do more maths and get tweaking. Wait...
something's not quite right there.

On a more serious note: the biggest barrier to correctness I've encountered is
doing things I don't understand. I also care less and less for being paid to
understand particular things, it's so much worse than just being paid to do
work.

------
luord
Yet another generic and sophomoric "the paradigms and type systems I prefer
are so much better guise!" Piece.

This one is particularly obnoxious because he's aware of the tribalism and
thinks that, by mentioning it, his post is magically no longer just more fuel
for the decades-old flame wars.

------
jasonhansel
Haskell has existential types, which are specifically used to implement
dynamic dispatch. The problem here is weak type systems & excessive
statefulness, not dynamic dispatch per se.
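A minimal sketch of that technique, using illustrative names and assuming the
`ExistentialQuantification` extension: the wrapper hides the concrete type and
keeps only the class dictionary, so the call is dispatched at run time.

```haskell
{-# LANGUAGE ExistentialQuantification #-}

class Shape a where
  area :: a -> Double

data Circle = Circle Double
data Square = Square Double

instance Shape Circle where
  area (Circle r) = pi * r * r

instance Shape Square where
  area (Square s) = s * s

-- The existential wrapper: any Shape, with its concrete type forgotten.
data AnyShape = forall a. Shape a => AnyShape a

-- Dispatch goes through the dictionary captured inside each AnyShape,
-- so a heterogeneous list of shapes can be processed uniformly.
totalArea :: [AnyShape] -> Double
totalArea = sum . map (\(AnyShape s) -> area s)
```

This gives the "objects with dynamic dispatch" shape within a strong static
type system, which is the commenter's point: the type system, not the
dispatch, is where the safety lives.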

------
julius_set
I disagree; by this article's logic, everyone should strive for a perfect
codebase, which is impossible.

You will always run into issues, because code, just like matter, decays
toward entropy.

------
pojzon
Pretty good piece that brings to my mind software craftsmanship principles.

