
Why I never finish my Haskell programs - AndrewDucker
https://blog.plover.com/2018/09/03/#what-goes-wrong
======
pash
I think many beginning Haskellers have this problem. To overcome it, my advice
is to write Haskell code with the knowledge that you can re-write it more
readily than you can re-write code in many other languages. Write the code
that fits the immediate application, and rely on the type-checker to make it
straightforward to refactor when the need arises.

I think that’s what many experienced Haskellers would say is the language’s
best attribute for getting things done, that the type system makes it possible
to refactor even a large program with the confidence that all the parts you
replace will slot perfectly back into the original structure. Or that changing
the core structure itself will result in a new structure that has all the
right slots for all the various bits and pieces that need to slot into it.
Having the confidence that you will be able to refactor painlessly, you should
be less concerned with finding the perfect abstraction up front. Write code,
make it work, then make it better.

And as others have mentioned, yes, the right abstraction and appropriate level
of generality will become easier to recognize as you write more Haskell. Go
with the first decent implementation you can come up with, and as you gain
experience that first implementation will more and more often turn out to be a
good one. In the meantime, run HLint and read more Haskell code, and you’ll
quickly pick up most of the generalizations that really make sense to use in
typical applications. The more experienced you get, the more confident you
should become that significant time spent generalizing code to no real purpose
is pointless and tends to result in code that both reads worse and runs worse
than what you started with.

~~~
nextos
In my case, I think it's because large programs always end up containing
subproblems that are better expressed in paradigms other than the functional
one, and it becomes frustrating when I can't shoehorn them into the Haskell
way of doing things.

My favorite languages are, for this reason, multi-paradigm: Common Lisp,
Mozart/Oz, Scala and C++. It's a bit like building _La Sagrada Familia_
(which is why it's depicted on the cover of CTM). If you want a superb
solution, you end up using many styles, like Gaudí did.

But I reckon the future will move towards more provably correct solutions, and
we will be using things closer to e.g. Idris. Hopefully that's orthogonal to
homoiconicity.

~~~
tincholio
I agree with your sentiment, but I find the analogy to La Sagrada Familia
funny, in that it has still not been finished, it's a black hole for money,
and you can see, clearly, that it's a hodge-podge of styles... all
characteristics that might not be good for your software project.

~~~
m_mueller
Which large software project has ever been 'completed'?

~~~
lasagnaphil
A lot of single-player (not online-serviced) computer games could be regarded
as 'completed': once they're out on the store shelves, it's pretty much done
(although with the rise of Steam and continuous updates this is less and less
the case).

That’s probably why many game programmers are more pragmatic in their
programming practices: they have a concrete deadline to pursue, with a
predictable subset of hardware for their program to run...

~~~
m_mueller
Point taken: a lot of software in the pre-internet era could be considered
completed. On the other hand, there are always still going to be issues, so
you could also claim that these projects are just abandoned.

In general I think software is more like a house than an art piece - it keeps
adapting as long as people use it.

------
hhmc
I'm reminded of one of my favourite HN comments on Haskell:

'There's something very seductive about languages like Rust or Scala or
Haskell or even C++. These languages whisper in our ears "you are brilliant
and here's a blank canvas where you can design the most perfect abstraction
the world has ever seen.'

[https://news.ycombinator.com/item?id=7962612](https://news.ycombinator.com/item?id=7962612)

Although it's not 100% applicable in this case (unless you argue that the cost
is to your own time) - I think the sentiment is perfect.

~~~
zengid
I think Rich Hickey described it pretty well when he said (paraphrasing from
[0]) that static languages present the programmer with neat little puzzles to
solve, which _feels_ like writing applications when we're really just creating
intricate types and abstractions. I think he has a point, but I certainly
don't want to give up the benefits of static languages, like being able to
catch all of my silly errors.

[0]
[https://youtu.be/2V1FtfBDsLU?t=39m44s](https://youtu.be/2V1FtfBDsLU?t=39m44s)

~~~
codebje
His argument begs the question.

If you pre-suppose that describing types and finding appropriate abstractions
aren't "writing applications", that is, they offer no value in the process,
then spending time doing them is of course solving neat little puzzles to no
benefit.

On the other hand, if you pre-suppose that types and abstractions offer some
value to the process of writing an application, then solving those puzzles is
adding value.

Calling types and abstractions a "neat little puzzle" is a diminution we could
apply to other aspects of writing an application: implementing an algorithm to
correctly process some data is a neat little puzzle presented by our test
cases.

Rich Hickey would need to demonstrate the lack of value of types and
abstractions for producing applications to make a solid argument here, and
that I believe is a steep uphill battle, because Clojure would be a pretty
terrible language if you took out the ISeq abstraction, and there's an awful
lot of effort spent shifting the puzzle of "is this function being used right"
from the type system game engine to the test harness game engine.

~~~
rictic
Time spent with a type system buys you compiler-checked proofs of some
properties.* The important question is whether the time is worth the benefit.

One of many things that I love about the TypeScript type system is that it
gives so much power to the author. It is the most expressive industrial type
system I know of, but it also makes it easy for the programmer to say "I can't
prove that this is true, just trust me that it is".

* A nice side benefit is that this is also practice, so you can also gain
skill. This is how I justify sometimes spending much more time with the type
system than it would otherwise be worth when learning a new language or
hacking on a personal project.

~~~
lmm
> One of many things that I love about the TypeScript type system is that it
> gives so much power to the author. It is the most expressive industrial type
> system I know of, but it also makes it easy for the programmer to say "I
> can't prove that this is true, just trust me that it is".

I really wanted to like Typescript, but once you're thinking in HKT it's
incredibly frustrating to have to manually translate a function into its
flattened expansion (just like it's incredibly frustrating to use a type
system without generics once you've used one that has generics). Every serious
industrial language allows a programmer to say "I can't prove that this is
true, just trust me that it is"; I suspect many people who struggle to start
out in Haskell would do well to make a little more use of unsafeCoerce and
unsafePerformIO (they would no doubt give themselves runtime errors, but
sometimes the easiest way to understand why you had a type error is to run the
code and see what the values are at runtime).
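For what it's worth, the run-it-and-look escape hatch described here can be sketched as below. `Debug.Trace.trace` is the standard packaging of the same idea; `spy` is a hypothetical helper name, not anything from base:

```haskell
import System.IO.Unsafe (unsafePerformIO)

-- Peek at a runtime value from pure code: print it, then return it
-- unchanged. Useful for understanding a confusing type or logic error
-- by inspecting the actual values flowing through.
spy :: Show a => String -> a -> a
spy label x = unsafePerformIO $ do
  putStrLn (label ++ ": " ++ show x)
  pure x
```

For example, `sum (map (spy "elem") [1, 2, 3])` prints each element as the sum forces it; beware that laziness may reorder or elide the prints.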

(I made a small hobby tool with ScalaJS and was amazed how easy it was, so
I'll be advocating for that over Typescript).

------
tannhaeuser
Coming from Prolog I'm loving Haskell, but I'm seeing a special kind of "worse
is better" at work here: projects using innovative and sophisticated languages
run a high risk of falling into an obsessive "getting it right" and
holier-than-thou mentality, with the result that they often never get
finished, and even when finished, have a high barrier to attracting
contributors. It's unfair and embarrassing, but shitty languages like
JavaScript and PHP often allow you to be more utilitarian and churn out good
enough code, because you're not emotionally attached to them and aren't under
peer-group pressure to express e.g. algebraic properties in their purest form
or some such.

~~~
pwm
I'll put a slightly different spin on this: At my current job the system I'm
writing is in PHP (for reasons...). In its core it's all about domain
modelling with some workflow sprinkled on top. Haskell would be a near perfect
fit, but it can be done in PHP. However the code itself looks very Haskelly. I
have a growing library of domain specific types that are composed into larger
and larger tree shaped ADTs all the way to the top level entities. Validation
is mapping/folding these ADT trees where nodes are (type, data) pairs that are
mapped to their instantiation or its failure. The workflow bit is essentially
a couple of FSMs with conditional transitions where the condition is usually
the existence of some type that fulfils its constraint. Etc...

Reading it back it sounds analogous to the classic saying of one can write
fortran in any language. In my opinion having experience with Haskell gives
you a mindset first and foremost. When you bump into a problem where this
mindset is a good fit you can use that knowledge with whatever tools are at
hand.
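As a rough illustration of that style in Haskell itself (a sketch with hypothetical domain names, not the actual system): small validated types with smart constructors compose applicatively into larger entities, so a node's validation is exactly "instantiation or its failure".

```haskell
-- Hypothetical domain types with validating smart constructors.
newtype Email = Email String deriving Show
newtype Age   = Age Int      deriving Show

mkEmail :: String -> Either String Email
mkEmail s
  | '@' `elem` s = Right (Email s)
  | otherwise    = Left ("invalid email: " ++ s)

mkAge :: Int -> Either String Age
mkAge n
  | n >= 0 && n < 150 = Right (Age n)
  | otherwise         = Left ("invalid age: " ++ show n)

-- A larger entity composed from the small types; validating it is
-- just composing the smart constructors applicatively.
data Person = Person Email Age deriving Show

mkPerson :: String -> Int -> Either String Person
mkPerson e a = Person <$> mkEmail e <*> mkAge a
```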

------
yogthos
I definitely find there's a strong relationship between language complexity
and bikeshedding. When you have a big language like Haskell or Scala, it's
easy to get distracted from solving the actual problem by trying to do it the
most "proper" way possible. This is also how you end up with design
astronautics in enterprise Java as well where people obsess over using every
design pattern in the book instead of writing direct and concise code that's
going to be maintainable.

Nowadays I have a strong preference for simple and focused languages that use
a small number of patterns that can be applied to a wide range of problems.
That goes a long way in avoiding the analysis paralysis problem.

~~~
mac01021
I don't disagree in general but is Haskell a big language?

~~~
wtracy
Haskell has a ridiculous number of obscure operators. Here's a list of "common
surprising" operators in Haskell:

[https://haskell-lang.org/tutorial/operators](https://haskell-lang.org/tutorial/operators)

~~~
dpratt71
I don't think it makes sense to characterize Haskell as "big" on this basis,
because 1) it is trivial to define an operator in Haskell, so there's bound to
be a lot of them and 2) even the "standard" operators typically have a simple
definition (e.g.
[https://www.stackage.org/haddock/lts-12.9/base-4.11.1.0/src/...](https://www.stackage.org/haddock/lts-12.9/base-4.11.1.0/src/GHC-Base.html#%24)).
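For instance, the linked `($)` is, modulo the levity-polymorphism details in modern base, just function application:

```haskell
-- We hide Prelude's ($) so this file can define its own copy.
import Prelude hiding (($))

-- The essence of ($) as defined in base: right-associative and at the
-- lowest precedence, so `f $ g $ x` parses as `f (g x)`.
infixr 0 $
($) :: (a -> b) -> a -> b
f $ x = f x
```

Its whole value is the precedence: `print $ length $ words "a b c"` saves a pair of parentheses.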

~~~
village-idiot
On the whole, I consider user-defined infix operators to be a huge mistake.
While the few common ones are great, the ability for every single library
creator to add their own infix operator turns into a mess in the long run.

~~~
gowld
They're fine inside a limited domain-specific scope; just don't go importing
operators from many libs willy-nilly.

~~~
village-idiot
It’s very hard to convince people to keep them in that limited scope.

------
ocharles
There's a weird interpretation here that this post is the author expressing
frustration with this process. I often have a similar experience and I
wouldn't want it any other way! This process of repeatedly asking "what _is_
this?" just doesn't seem to come up in the same way in other languages. This
gives me the ability to do some practice I wouldn't otherwise be able to do,
one that often has tremendous transfer over to "real work", because I can
start to see patterns and get a feel for what is really going on once I get
rid of all the dull IO tedium.

If you want an analogy, consider this like studying jazz or something. Sure,
you could just notice a II V I progression and call it done, but if you pick
away at each individual note, you can find a whole lot more going on behind
the scenes.

Basically, I don't really consider what's happening in the blog post a bad
thing. It just has a time and a place, and you need to be aware when it's the
wrong time.

------
rossdavidh
I've seen this, and I've never had a Haskell gig. One of the best pieces of
advice I ever got re: programming was, "don't write the abstraction until
you've written three cases first". This is good advice in the intended way
(you will write the abstraction better when you get to it), but even better
because you probably often won't ever write three of the thing in question, in
which case you shouldn't write the abstraction anyway.

~~~
vmchale
> One of the best pieces of advice I ever got re: programming was, "don't
> write the abstraction until you've written three cases first". This is good
> advice in the intended way (you will write the abstraction better when you
> get to it),

I don't think this is good advice in the context of Haskell. Haskell allows
some abstractions that aren't just "black boxes" or glorified templates. It's
kind of like elementary logic: the more models there are, the fewer proofs
there are, and vice versa. Analogously, when you write a more abstract
function, there are fewer ways you can manipulate it and thus it is in some
sense simpler.

~~~
dan-robertson
I half agree with you. I think the “abstraction” your parent is talking about
would correspond to typeclasses rather than polymorphic functions. That is, I
think that this would be a reasonable type for a function even if it is only
called once:

    Eq a => a -> MyObj a -> Maybe Foo

Whereas I think the following would not be ok (probably even if you had a lot
of instances):

    class HasFoo o where
      getFoo :: Eq a => a -> o -> Maybe Foo

    instance HasFoo MyObj where ...

I think the rule should be that one should write the most general code that
minimises the entropy/size of the source code. That way one can prefer
polymorphic functions (as they need less type-signature entropy and bytes,
unless they have loads of constraints, in which case one should consider
wrapping those constraints together), while still preferring not making crazy
single-instance typeclasses.

------
gabipurcaru
I think the main reason is that there is no _actual_ problem that OP needs to
solve. If there was one, then he would get pragmatic and figure out one of the
reasonable solutions to this and move on with his life.

It's true, though, that Haskell easily puts you into a mindset where you want
to simplify and generalize the code as much as possible, leading to wasted
time on overly general solutions. That shouldn't happen, because if you e.g.
later want to extend the solution from lists to traversables, Haskell gives
you the confidence to safely refactor the method at that point.

~~~
sdegutis
I lost confidence in Haskell's ability to let me write something one way and
safely refactor it later, when I found out that you can't use a ton of the
algorithmic functions in the standard library because they do things all
wrong.

~~~
KirinDave
What?

Other than strings desperately needing to be purged from the library, what are
you talking about?

Haskell has some really solid standard libraries, and it's extended library
set has some of the most sophisticated algorithms packages in the world.

~~~
hhmc
I'm not the commenter you're replying to, but I've often found the Haskell
numeric classes (Num, Fractional, Integral, etc.) prickly. They _almost_, but
don't quite, map to (mathematical) algebraic structures.

~~~
gizmo686
Having used Haskell for math programming, I agree with this sentiment.
Haskell's standard classes are in an uncanny valley of almost matching the
mathematical structures.

If you want to do that sort of thing with Haskell, I would suggest switching
to the numeric prelude [0]

[0]
[https://wiki.haskell.org/Numeric_Prelude](https://wiki.haskell.org/Numeric_Prelude)

------
thomasjm42
I think the problem in this case is that the author's attempt at
generalization went off in the wrong direction.

The fact that fixed-length lists aren't working well as a representation for
polynomials is a hint. Polynomials with real coefficients form a vector space
[0], so you should really think of them as infinite-dimensional lists of
numbers (in which most of the numbers are zero).

Once you know you want to represent an infinite dimensional vector with only a
few nonzero entries, you can use a sparse vector. The first library that comes
up when you google "Haskell sparse vector" is
`Math.LinearAlgebra.Sparse.Vector`, which lets you write something like this
(I haven't run this code but it should get the job done):

    import Math.LinearAlgebra.Sparse.Vector as V

    poly1 = V.sparseList [1, -3, 0, 1]
    poly2 = V.sparseList [3, 3]

    sumPolys = V.unionVecsWith (+)

So, I read this more as an article about trying to reinvent the wheel in a
domain which isn't necessarily simple, which isn't a good idea in any
language.

[0]:
[https://en.wikipedia.org/wiki/Examples_of_vector_spaces#Poly...](https://en.wikipedia.org/wiki/Examples_of_vector_spaces#Polynomial_vector_spaces)

------
aidenn0
I see this a lot with intermediate lisp programmers; they spend so much time
building ivory tower abstractions that the original problem is forgotten. I
sometimes call this "bottom down" programming.

Predicting the future is very hard; remembering the past is much easier. If
you find yourself typing the exact same pattern for the Nth time, then it's
time to refactor it into a macro or a function as appropriate.

Figuring out what parts of the next 1000 lines of code you are going to write
will benefit from an abstraction (and which abstraction that is) is a rare
skill that comes only (if at all) with experience.

~~~
sokoloff
“Bottom down” really resonated with me in my dalliances with both Common Lisp
and Haskell.

~~~
aidenn0
I first heard the term "bottom down" from my dad a long time ago, but he used
it to mean any sort of programming without a plan. It was many years later
that I applied it specifically to people doing "bottom up" but get so obsessed
with building the perfect base that they never solve the original problem.

------
agentultra
The turning point for me was when I realized that these problems exist in
other languages and are practically invisible. Without a good type system and
inference you cannot hope to catch all of your type errors. You'll just write
some unit tests and run your program many times until you're certain you've
sussed them all out... until that pesky bug report comes in. Then you get to
play detective!

I honestly don't have time left in my life for such meaningless drudgery.

With a type system I have the computer aid me in designing the program. It
keeps me honest and ensures that I don't have type errors, which are a huge
class of things I'd rather _not_ have to think too hard about.

When I program in Haskell I spend more time solving problems than fixing
programming errors.

------
mlthoughts2018
In Haskell (and similar languages), the language gives you the ability to do
genericization the right way. But 90% of the time you shouldn't, and it's
very hard not to. It's not a matter of restraint: the language makes it
genuinely hard to get simple things done unless you plug into the abstraction
vortex.

Conversely, languages like Java, C++ and Python (if you use classes), make it
very easy to write simple things without abstraction, but virtually all use of
abstraction goes off the rails immediately and everything shoots you in the
foot, so that _good abstraction_ is not even really a thing at all.

Pick your poison!

~~~
emmelaich
True but I still manage to go down some rabbit holes in C++.

I could use the older STL iterators, but nooo I use a range, with a lambda.
Returning a tuple with tie and pair and some more stuff from the latest C++1x
standard.

And in Linux userland, I just want to check a pid but I invent some smart
locking system to ensure I never get a false positive or negative.

~~~
HelloNurse
In C++ the main actual rabbit hole of useless abstraction begetting useless
abstraction is premature generalization from what you actually need to
templates that can be used with types you don't need, which then proceed to
kick you with implicit assumptions, type traits, unforeseen subtle type
distinctions like those that come up in Haskell.

Other kinds of "stuff from the latest C++1x standard" tend to consist of
relatively harmless novel ways to express something, usually not intrinsically
complex and, when inappropriate, causing only localized damage (typically
puzzling syntax or slightly wrong declarations with no impact on unedited code
parts, even in the same class or function).

------
KirinDave
You could also just write the less general version and stop listening to
folks who flip out and scoff at every piece of code that isn't maximally
general.

Crazy, I know, but especially when we're doing labor in industry, even
without maximal generality your code is probably going to outlive its patron
corporation and then die in obscurity.

~~~
gowld
The only person who flips and scoff at every piece of code isn't maximally
general is... the author the code. That's the problem.

~~~
KirinDave
That's not true. There are folks in the community who absolutely DO put
pressure on open source libraries to be maximally general (or to use THEIR
abstractions over others).

This fosters an environment that might already lead people to second-guess
themselves, because it's so big and new.

------
ianbicking
I always felt very productive in PHP, because the only rewarding part of PHP
is having made something. It never rewarded sophistication... but making a web
site that did something WAS rewarding, so all my attention went to that part.

Calling Haskell an anti-PHP seems fair.

~~~
psergeant
The same author of the original explores this theme with Java:
[https://blog.plover.com/prog/Java.html](https://blog.plover.com/prog/Java.html)

~~~
LandR
That perfectly describes Java (and C#).

It's the land of mediocrity.

------
ainar-g
>I ought to be able to generalize this

I've never understood this. Unless you're writing a library that you plan to
publish, or already have actual cases where you need a more general solution,
why spend time trying to generalise code instead of moving on to the next
task?

~~~
kelvin0
My tentative answer is this: someone who uses Haskell appreciates elegant
solutions (a.k.a mathematical/functional) and is inclined to write things
'properly' once and they might also idealize that the functions they write
will not only solve this current issue, but be useful to others and themselves
in other programs ... thus going down the generalization and elegance rabbit
hole.

Of course, all of this is purely speculation on my part.

~~~
galfarragem
Summing up: the typical Haskeller is a perfectionist.

If I allowed it, my perfectionist self would ditch every language but Haskell
without blinking. No other mainstream language gives you more control and
purity. For a perfectionist this is opium.

~~~
cortesoft
It is a particular form of perfectionism. Other types of perfectionists might
want to be perfect at writing programs as fast as possible.

------
jerf
You can eventually "come out the other side" and get to the point where you
write the general version correctly the first time. But it is some degree of
work. I think it's a good exercise for a pro, but you can certainly live
without it.

The general principle does come in handy elsewhere, though. Doing the most
useful work with the minimum power is a generally useful skill. I get a lot of
mileage out of it in other languages, because across a couple hundred modules,
the difference between modules that have minimum dependencies and modules that
carelessly overuse power becomes quite substantially different in character.

~~~
ajross
> You can [...] write the general version correctly the first time. But it is
> some degree of work. [...] Doing the most useful work with the minimum power
> is a generally useful skill.

There's something wrong with that logic, but I'm too lazy to work out the
proof in the general case.

~~~
jerf
When people wonder why my typical comments run on to multiple screenfuls, it's
because I'm armoring them against this sort of dismissive snark. I was in
between tasks today and lacked time to make it longer.

------
jonalmeida
I think there's an error in the first example.

    Poly [1, -3, 0, 1]

Should be:

    Poly [1, 0, -3, 1]

EDIT: My mistake.

~~~
laurentl
I thought so too at first but given the way addition is defined later on, it
makes sense to keep the coefficients sorted by increasing power (the leftmost
element in the list is its head, and the easiest to access when doing anything
recursive)

~~~
rzzzt
Evaluation also becomes easy this way, using Horner's method:
[https://en.wikipedia.org/wiki/Horner%27s_method#Python_imple...](https://en.wikipedia.org/wiki/Horner%27s_method#Python_implementation)

~~~
abecedarius
But this doesn't argue for the low-to-high order, because of reversed(). This
code would be simpler and faster with the coefficients in the opposite order.

~~~
rzzzt
You're right, the process does start from the higher coefficients, and so does
not really support the ordering presented.

You either need to use foldr to defer the multiply-and-add until the end of
the list is processed, or reverse the list before processing it.
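The foldr version might look like this, assuming the article's low-to-high coefficient order (`Poly [1, -3, 0, 1]` for x^3 - 3x + 1):

```haskell
-- Horner evaluation for coefficients stored lowest power first.
-- foldr defers each multiply-and-add, so the computation effectively
-- starts from the high-order end without reversing the list.
evalPoly :: Num a => [a] -> a -> a
evalPoly coeffs x = foldr (\c acc -> c + x * acc) 0 coeffs
```

For example, `evalPoly [1, -3, 0, 1] 2` gives 2^3 - 3*2 + 1 = 3.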

------
jlebar
I am not a Haskell programmer, but is this correct?

    (Poly a) + (Poly b) = Poly $ addup a b   where
       addup [] b  = b
       addup a  [] = a
       addup (a:as) (b:bs) = (a+b):(addup as bs)
Imagine a simple example, adding `x+2` and `10`. In OP's representation, these
would be represented as the lists [1, 2] and [10]. That is, the first element
is the coefficient of the term of _highest_ degree.

But doesn't this implementation add list elements left-to-right, so we'd end
up with the result [11, 2] instead of [1, 12]?

~~~
desdiv
You're misreading OP's representation:

> The polynomial x^3 −3x +1 is represented as Poly [1, -3, 0, 1]

It starts with the 0th coefficient and goes up. So adding `x+2` and `10` would
be zip-adding lists [2,1] and [10,0].
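Running the article's `addup` (quoted elsewhere in the thread) on those lists confirms this reading; with the low-order-first representation the trailing zero isn't even needed, since the leftover tail of the longer list is kept as-is:

```haskell
-- addup as defined in the article: zip-add coefficient lists stored
-- lowest power first, keeping the tail of the longer list.
addup :: Num a => [a] -> [a] -> [a]
addup []     b      = b
addup a      []     = a
addup (a:as) (b:bs) = (a + b) : addup as bs
```

So `addup [2, 1] [10]` is `[12, 1]`, i.e. x + 12, as expected.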

~~~
jlebar
>> The polynomial x^3 −3x +1 is represented as Poly [1, -3, 0, 1]

> It starts with the 0th coefficient and goes up.

Is a list in Haskell canonically written in the opposite order of what I
expect? I expect that the 0'th element of the list [a, b, c] is `a`. In
Haskell, is it `c`? Assuming `a` is the 0th element, then the coefficient for
the highest-degree term is the 0th element of the Poly. And since the degree
of the two polynomials doesn't necessarily match, matching up the two highest-
degree coefficients and adding them is obviously wrong.

Or am I going crazy here?

------
Jeff_Brown
"Doc, it hurts when I do this." "Don't do that."

------
skybrian
Similar things happen in other languages. Most recently, I started writing a
program in Elm to try it out, realized I wanted to use some CSS, and then got
distracted looking at the various ways to do that, with their different
tradeoffs. (What is this stylish-elephants package?)

Sometimes it's more productive when you join a team that has already decided
on its standards. You don't learn as much, though.

------
village-idiot
This is me. I have a backlog of personal projects that I've slowly burned
through and every freaking time I start with Haskell and end up in Rails. I
love working in Haskell ... in theory. In practice I spend way too much time
figuring out how to wrangle data into the correct shape when it would've taken
me 30 minutes to accomplish in any other language I know, static or dynamic.

------
misja111
It's about Scala, not Haskell, but the gist is the same:

at my company we're giving new candidates a live coding interview where they
get one hour to write a very simple application using either Java or Scala.
Candidates are free to choose between those languages.

The funny thing is that candidates who choose Scala are never able to fully
finish the assignment. Even though the application is really simple, many
don't finish half of it and some even get completely stuck in complex for-
comprehensions and what not. Candidates who choose Java however mostly are
able to finish the assignment. The code might not always be the most elegant,
but it does what it is supposed to do.

Even though I like Scala a lot, I feel it has the downside that it gives you
too many options to do the same thing. This can get in the way when you are
simply trying to implement some basic business feature.

~~~
amelius
So Scala is harder to write, but the question is: is it easier to read?

(Since in general a particular piece of code is read far more often than it is
written).

~~~
misja111
That's an interesting question although it's not directly related to the
article.

I think it is definitely possible to write Scala code that is easier to read
than the same Java code. However it seems that a large part of the Scala
community does not see this as their main objective when writing code.

Sometimes the focus seems to be on writing code as terse as possible, which is
not the same as readable. Or the focus is on making code more generic and
abstract, which can be a useful goal depending on the use case, but it's
definitely not the same as readable.

------
thomasfedb
Writing elegant code that never works can easily take me twice as long as
writing okay code that actually works.

~~~
sverhagen
You're phrasing that in a seductive way. "Okay code". Maybe that's good
enough, right? Maybe not? Someone's "okay code" may be someone else's "bad
code". I'm fixing someone else's "okay code" on a daily basis since it's
riddled with bugs. Somehow I'd expect that if they would have had the mastery
to write elegant code, they may also have been able to make it less buggy, or
at least to make it easier for me to fix it.

Also, elegance, thorough testing, and careful specification are, aside from
their explicit qualities, additional touch points at which the author in
question can discover and fix bugs before the code gets handed off and
becomes my problem on some future date.

(Not saying you should gold-plate it into elegant code. Just saying that there
is value that should not so easily be dismissed.)

~~~
thomasfedb
I think for me, "okay code" means that it's functional, debuggable, and up to
professional sniff, but nothing flashy. As opposed to sexy-cool-trendy-meta-
wow code.

------
engi_nerd
I heard this gem a couple of weeks ago:

"Don't engineer things to failure."

When you find yourself saying "I ought to be able to generalize this", that's
when you need to stop and just write the code you need to write NOW. Just
because you can, doesn't mean you should.

------
jrosenbluth
I think the main problem with the specific example is that a list is the wrong
data structure to represent a polynomial. Even a simple `IntMap` would be
better, where the key is the power.

------
codedokode
This is clearly overengineering. There is no need to write a generalized
function because it is only useful for adding polynomials.

And a function that works only with polynomials will be easier to read.

------
wtracy
I have to ask: does the author not have the same problem in other languages?
If not, why not?

Any major language has dozens of libraries that do almost-but-not-quite what
you want. Any language with support for macros, templates, or generics offers
opportunities to write unnecessarily generic code. (The authors of Spring
managed to get into that tarpit even before Java introduced generics!)

~~~
jernfrost
I write in many different languages, but I would say this kind of thing
depends a lot on the type of language.

With Go, for example, I would not spend much time on this, because it is such
a plain language. You just accept there is no fancy way of doing this and
move on.

With C++ I found it a bit different. Sometimes I mess around thinking there
MUST be some way of solving an annoying problem, only to realize there just
isn't. Other times I find a solution, but it tends to turn into a horribly
ugly syntactic mess, so I abandon it. I've pretty much given up writing
anything elegant or fancy in C++. It just turns to shit so quickly.

Swift I found fairly straightforward to write. All the typical stuff that
gets me stuck in C++ never seems to pose a problem for me in Swift.

Julia is the language that ought to have thrown me into the Haskell problem
described, since you can do a lot of crazy stuff with types and macros. I do,
to some degree, but mostly Julia just does what I want it to do. I guess it
is partly down to the core functions being well designed, and to the fact
that, despite the flexibility, it is still not as magical as Haskell,
Clojure, etc.

------
ssadler
Polynomial long division in Haskell:
[https://github.com/libscott/math/blob/master/src/Math/Polyno...](https://github.com/libscott/math/blob/master/src/Math/Polynomial.hs)

------
kccqzy
In any language, premature abstraction is bad. That applies whether you use
type classes in Haskell or classes in Java. It takes experience to judge at a
moment's notice whether something is worth abstracting. Beginners frequently
get this wrong.

------
l0b0
There are lots of relevant programming aphorisms: "Write the simplest thing
that could possibly work." "YAGNI." "KISS." You either have a problem you need
to solve or not. If not, why do you expect the process to ever finish?

This is one area where TDD really shines. You write a simple failing test and
implement something that makes that test pass. If that isn't sufficiently
generic you write another test, make _both of them_ pass, and refactor until
the solution is as simple as possible while the tests pass. Repeat to get to
where you need to go.
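A minimal sketch of that loop on the thread's running example, with plain
checks standing in for a real test framework (the `addPoly` and `check`
names are illustrative):

```haskell
-- The first test drove the base cases; the second forced the general clause.
addPoly :: [Int] -> [Int] -> [Int]
addPoly xs []         = xs
addPoly [] ys         = ys
addPoly (x:xs) (y:ys) = (x + y) : addPoly xs ys

main :: IO ()
main = do
  -- red, then green: adding the zero polynomial is the identity
  check (addPoly [1, 2] [] == [1, 2])
  -- next test: coefficients of equal powers are summed
  check (addPoly [1, 2] [3, 4, 5] == [4, 6, 5])
  putStrLn "all tests pass"
  where check b = if b then pure () else error "test failed"
```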

~~~
AlexCoventry
There needs to be a balance. You should choose an architecture which will
accommodate predictable future needs without too much refactoring. A suite of
unit tests can't help you much if you need to unmangle a bunch of severe
abstraction violations between what you now need to be well-encapsulated
components.

~~~
l0b0
Actual red-green-refactor TDD results in the simplest possible code to solve a
problem, IME. Over- or under-abstraction tends to be almost instantly
recognizable when working with a well-written test suite, and can then be
easily and safely refactored away.

~~~
AlexCoventry
The simplest possible code for the current test suite is not always strategic.

------
dustingetz
1) to me, this is the difference between haskell and Clojure 2) in the future,
normal people will be able to code, so work backwards from that

~~~
xamuel
>in the future, normal people will be able to code, so work backwards from
that

This.

I'll never be able to understand certain programmers' insistence that
imperative code is somehow unnatural or that "we're only used to it because of
momentum" or whatever. For thousands of years people have been issuing
imperative instructions to each other.

"Wash. Rinse. Repeat."

~~~
owl57
> For thousands of years people have been issuing imperative instructions to
> each other.

This ancient culture is pretty strong, you're right. I've lost count of the
times I begged people to skip all this unreliable "turn right at the second
intersection, rinse, repeat" and just tell me the street address, long before
I learned the word "imperative".

------
vmchale
Not that representing polynomials as linked lists is by any means ideal, but
this "first-guess" program is quite elegant in my opinion.

I know there's a tendency to over-abstract things among the Haskell community,
but if you focus on pure, immutable data structures and algorithms on them
you'll discover a truly wonderful and indeed pleasant and productive language.

------
Pimpus
> “I ought to be able to generalize this,” I say.

lol, it's no wonder he never finishes a program. But this isn't a problem
limited to Haskell - one can start generalizing unnecessarily in any language.

To be fair though, the Haskell ecosystem as described in this article (and
from my experience) is quite frustrating, and it makes me appreciate my
current language Rust that much more.

------
jejones3141
As others have said, go ahead and implement it; you'll have a better idea of
what you might want to change or generalize after you have that version to
experiment with.

(One case you might want to consider: with that way to create polynomials,
won't x^1000000 take a lot of typing?)
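If the representation is a dense coefficient list, that would indeed mean a
million-element value; a sparse map sidesteps it. A sketch (names are mine,
assuming the `Data.IntMap` representation suggested elsewhere in the thread):

```haskell
import qualified Data.IntMap.Strict as IM

-- Dense list: x^1000000 needs 1000001 entries, all but one of them zero.
denseX1000000 :: [Integer]
denseX1000000 = replicate 1000000 0 ++ [1]

-- Sparse map from exponent to coefficient: one entry.
sparseX1000000 :: IM.IntMap Integer
sparseX1000000 = IM.singleton 1000000 1

main :: IO ()
main = print (length denseX1000000, IM.size sparseX1000000)  -- (1000001,1)
```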

------
kakashi19
It’s not the abstraction, it’s the level of abstraction - the higher you go,
the more general and the less concrete it becomes. Striking the balance is the
key.

------
notatcomputer68
This seems like "DFS" programming. I'd advise "BFS". Do it the easy way and
put a "TODO: generalize like so."

------
ww520
Premature generalization. There.

------
M_Bakhtiari
I have this with all languages I get into. I get the urge to radically adopt
whatever patterns seem to be idiomatic for the language, whether that's
sensible or not.

I tend to judge the beginner-friendliness of a language by how strong that
urge tends to be, and I tend towards Common Lisp since it is very
multi-paradigm and not very opinionated, yet has very few semantic warts that
bite the user in the end. There are easier languages to pick up if you have
no concept of programming (things like HyperTalk or Scratch), but they have
obvious semantic deficiencies.

Haskell on the other hand is an excellent first language for structured
teaching because it's easy to express concepts in, but it stubbornly resists
attempts by the inexperienced or impatient to shotgun or cargo-cult a
solution.

------
patientplatypus
I've always felt that Haskell is a way for very smart people to never get
anything done.

I want to like Haskell, I really do, but it's like learning Latin: you'll
feel very smart, except no one can talk with you except other people who
correct your grammar.

Bleh.

~~~
whateveracct
I write Haskell professionally and I get plenty done :)

