I think that’s what many experienced Haskellers would say is the language’s best attribute for getting things done, that the type system makes it possible to refactor even a large program with the confidence that all the parts you replace will slot perfectly back into the original structure. Or that changing the core structure itself will result in a new structure that has all the right slots for all the various bits and pieces that need to slot into it. Having the confidence that you will be able to refactor painlessly, you should be less concerned with finding the perfect abstraction up front. Write code, make it work, then make it better.
And as others have mentioned, yes, the right abstraction and appropriate level of generality will become easier to recognize as you write more Haskell. Go with the first decent implementation you can come up with, and as you gain experience that first implementation will more and more often turn out to be a good one. In the meantime, run HLint and read more Haskell code, and you’ll quickly pick up most of the generalizations that really make sense to use in typical applications. The more experienced you get, the more confident you should become that significant time spent generalizing code to no real purpose is pointless and tends to result in code that both reads worse and runs worse than what you started with.
My favorite languages are, for this reason, multi-paradigm: Common Lisp, Mozart/Oz, Scala and C++. It's a bit like building La Sagrada Familia (and that's why it's depicted on the cover of CTM). If you want a superb solution, you end up using many styles like Gaudí did.
But I reckon the future will move towards more provably correct solutions, and we will be using things closer to e.g. Idris. Hopefully that's orthogonal with homoiconicity.
That’s probably why many game programmers are more pragmatic in their programming practices: they have a concrete deadline to pursue, and a predictable subset of hardware for their program to run on...
In general I think software is more like a house than an art piece - it keeps adapting as long as people use it.
Elm would be so awesome if they just forked it at 0.19 and said "no more breaking releases". But I doubt that will ever happen.
Seems like the opposite of a good time to release 1.0.
Of course, this means most people won't have an appetite for unstable Elm, but that's nothing new. Complaining that something is still 0.x unstable seems odd to me beyond the selfish "I wish I had it sooner."
A lot of HN's comments around Elm remind me of when I was a kid absolutely livid at Bungie for not releasing Halo 2 sooner. Doesn't Bungie know my summer just started? I don't want a fucking beta, Bungie! Me want game now! Bungie must be incompetent because everyone on my gaming forum wants it now as well!
The best advice, as you stated it, is to go with the first instinct, and then refine it, IF needed. It might turn out that the code you wrote was redundant anyway.
'There's something very seductive about languages like Rust or Scala or Haskell or even C++. These languages whisper in our ears "you are brilliant and here's a blank canvas where you can design the most perfect abstraction the world has ever seen."'
Although it's not 100% applicable in this case (unless you argue that the cost is to your own time) - I think the sentiment is perfect.
The reason so many programmers struggle with this problem is because nobody is paying them to write CRUD applications in Haskell. If they were, they'd find it more than adequate to the task and the job would be easy and painless.
People only go down rabbit holes when they don't have a manager breathing down their neck all day. It takes real discipline to create really good hobby projects on your own, regardless of language.
I get paid to write applications in Scala that are often mostly CRUD. The temptation to overengineer is still there and still constantly needs to be fought.
I've seen the same thing happen in plenty of Java codebases too, mind. There it tends to be less "make this work for any possible Monoid" and more "make it work for any imaginable form that the user might want to build", but I think it's another facet of the same issue.
I agree, but perhaps discipline is not necessarily complete restraint... I think some rabbit holes are worth exploring, they can separate outstanding projects from mediocrity. Perhaps it's about choosing the right rabbit holes in the right projects, another way of achieving that is minimalism where you limit the scope and not the depth.
not specifically directed to the parent comment, but it led me to write this:
I think the discussion here is too focused on business/productivity. I feel like different people have different approaches to coding. Some of us really need to write beautiful code, and we are ok with "wasting" a lot of time thinking about fluff until we recognise that it really is fluff, or come up with interesting ideas simply because we spent time thinking about something. Maybe you care about the end result, maybe you care about the code, maybe you care about both. And even when you care about one or another, you do it in different ways. We start optimizing algorithms, then we build frameworks and "optimize" workflows, and then we want to optimize the approach to coding. But I don't think that's universally doable as with algorithms. We end with great insights and confused discussions.
Ultimately I want to be able to tell the world that my work had a positive impact in some form. I do not want to be the guy who did useless things. This is my main motivation these days.
I've experienced this first hand. If I can't find a basket of Easter eggs within the first 30 minutes of my journey into a rabbit hole, I have to backtrack and move on to something else. I do not feel this limitation when I'm at home, doing things of my own volition.
If you pre-suppose that describing types and finding appropriate abstractions aren't "writing applications", that is, they offer no value in the process, then spending time doing them is of course solving neat little puzzles to no benefit.
On the other hand, if you pre-suppose that types and abstractions offer some value to the process of writing an application, then solving those puzzles is adding value.
Calling types and abstractions a "neat little puzzle" is a diminution we could apply to other aspects of writing an application: implementing an algorithm to correctly process some data is a neat little puzzle presented by our test cases.
Rich Hickey would need to demonstrate the lack of value of types and abstractions for producing applications to make a solid argument here, and I believe that's a steep uphill battle: Clojure would be a pretty terrible language if you took out the ISeq abstraction, and an awful lot of effort goes into shifting the puzzle of "is this function being used right?" from the type system game engine to the test harness game engine.
One of many things that I love about the TypeScript type system is that it gives so much power to the author. It is the most expressive industrial type system I know of, but it also makes it easy for the programmer to say "I can't prove that this is true, just trust me that it is".
A nice side benefit is that this is also practice, so you can also gain skill. This is how I justify sometimes spending much more time with the type system than it would otherwise be worth when learning a new language or hacking on a personal project.
I really wanted to like Typescript, but once you're thinking in HKT it's incredibly frustrating to have to manually translate a function into its flattened expansion (just like it's incredibly frustrating to use a type system without generics once you've used one that has generics). Every serious industrial language allows a programmer to say "I can't prove that this is true, just trust me that it is"; I suspect many people who struggle to start out in Haskell would do well to make a little more use of unsafeCoerce and unsafePerformIO (they would no doubt give themselves runtime errors, but sometimes the easiest way to understand why you had a type error is to run the code and see what the values are at runtime).
(I made a small hobby tool with ScalaJS and was amazed how easy it was, so I'll be advocating for that over Typescript).
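Roughly, the debugging trick mentioned above looks like this (`peek` is my own throwaway helper, not a library function, and it should be deleted once the bug is found):

```haskell
import System.IO.Unsafe (unsafePerformIO)

-- Hypothetical debugging helper: print an intermediate value without
-- changing the surrounding function's type. unsafePerformIO breaks
-- referential transparency, so this is strictly for debugging sessions.
peek :: Show a => String -> a -> a
peek label x = unsafePerformIO (putStrLn (label ++ ": " ++ show x)) `seq` x

mystery :: Int -> Int
mystery n = peek "intermediate" (n * 2) + 1
```

Because `peek label x` just returns `x`, it can be wrapped around any subexpression to see the runtime value that the type error was complaining about.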
It's seductive until you start getting excited more about higher order reasoning about algorithms and about what programs really accomplish and less about just showing off. A breakthrough program written in some uncool language like VB.NET is more impressive to me than yet another ________ written in Haskell or Rust with perfect abstractions.
Do they? I think the Haskell community online is pretty crappy for this and other reasons, but GHC/Haskell are at their best when you actually use them to write programs instead of doing things like this.
Reading it back, it sounds analogous to the classic saying that one can write Fortran in any language. In my opinion, having experience with Haskell gives you a mindset first and foremost. When you bump into a problem where this mindset is a good fit, you can use that knowledge with whatever tools are at hand.
Nowadays I have a strong preference for simple and focused languages that use a small number of patterns that can be applied to a wide range of problems. That goes a long way in avoiding the analysis paralysis problem.
Often when I write something in Haskell I have this feeling. It feels satisfying to build up nice types and constructs but I don't know if it at all pays off in any objective or empiric sense. I can't really tell if I've invented the problem that I just solved.
Priceless observation, nicely done!
I was quite inspired by Rich Hickey's Simple Made Easy talk when I listened to it last year. I think that's the one you're referring to. Excellent food for thought in that talk.
Sticking to Haskell 2010 with a few extensions that make known behavior more consistent (GADTs, NoMonomorphismRestriction, and a few others in that vein) is the best bang for buck in my experience.
Brainfuck or Whitespace are two minuscule languages that produce the most impenetrable sources.
From a great talk by Simon Peyton Jones https://youtu.be/re96UgMk6GQ
It doesn't seem any more reasonable to use them to declare Haskell a complex language than it would to have the existence of, say, a linear algebra library providing mathematical operators for Forth mean that Forth is a complex language.
Is the STL part of C++? Sure, it's a library, but it's a standard library - it should be there in every conforming implementation. Personally, I think of that as part of the language.
Is some vendor's RS232-port-handling library part of C++? I would say no.
In the same way, I think that Java's standard library is part of the language, and is in fact the strongest selling point of Java.
Then please do as the professor does when it comes to production code.
On the other hand, with the rise of stuff like OpenTracing and structured logging I think the Haskell community was pretty prescient about treating logging as an explicit side effect.
For the most part I admire the Clojure community's focus on data and wariness of higher order abstractions. Whether it be classes, typeclasses, or higher order functions, if you can express it with just data it's almost always better and most communities would do well to remember that.
On the flip side when there is a need for higher order abstractions, I sometimes find Clojure's standard tools to be lacking. Speaking of monadic structure being there whether you use them or not, transducers are a great example. I find them overcomplicated in Clojure, which I think is due to focusing too heavily on using standard function composition to compose them. I regard this as a bit of a trick or a coincidence. When's the last time you composed something with a transducer that wasn't just another transducer as opposed to an arbitrary function? In fact if you instead use monadic composition (i.e. compose `a -> m b` with `b -> m c` to get `a -> m c`, in this case `m` is just `List`) you'll find that transducers are just functions of the form `a -> List b` rather than higher order functions. And yes that `List` remains there even though transducers work on things that aren't just concrete collections.
I've been meaning to try to push out a Clojure library that shows this but haven't gotten around to it. Maybe this thread will be the kick I need.
For future reference, this isn’t a good argument for trolling Haskellers.
Many would consider that a feature, but if you want to #yolo it anyway, like in most other languages, just use trace in prod and call it a day.
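For reference, that looks like this; `Debug.Trace.trace` writes its message to stderr and then returns its second argument, so it drops into pure code without changing any types.

```haskell
import Debug.Trace (trace)

-- trace logs to stderr from pure code; the function's type is unchanged.
addShifted :: Int -> Int -> Int
addShifted x y = trace ("inputs: " ++ show (x, y)) (x + y * 2)
```

Like all uses of `trace`, the message only appears when the expression is actually forced, which laziness can make surprising.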
For example, you can specify what should happen if IO fails in your logging configuration. This handles the exceptional case consistently and in a single place without forcing you to structure your whole app around it.
What's more is that there really isn't a sane way to recover from such a catastrophic failure. If your database goes down, or you lose a disk, the only thing you can do is shut down the app. It's not like it's gonna keep humming along with the logging failing silently.
This kind of hyperbole that you either solve all problems via the type system or #yolo is precisely what makes the Haskell community so toxic in my opinion.
In Haskell you can write a function that does "unsafe" logging and handles exceptions if that's the behaviour you want. You can even make that function change its behaviour based on a config file if you really want to.
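A sketch of what such a function could look like (the `logUnsafely` name and the stderr fallback are my choices for illustration, not a standard API):

```haskell
import Control.Exception (SomeException, try)
import System.IO

-- Hypothetical helper: try to log to a handle, and if the write fails
-- (disk full, closed handle, ...) fall back to stderr instead of
-- crashing the caller. A config file could choose the fallback instead.
logUnsafely :: Handle -> String -> IO ()
logUnsafely h msg = do
  result <- try (hPutStrLn h msg) :: IO (Either SomeException ())
  case result of
    Left err -> hPutStrLn stderr ("logging failed: " ++ show err)
    Right () -> pure ()
```

The point is just that "how logging failures are handled" is an ordinary decision you make in ordinary code, not something the type system forces one way.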
> What's more is that there really isn't a sane way to recover from such a catastrophic failure. If your database goes down, or you lose a disk, the only thing you can do is shut down the app. It's not like it's gonna keep humming along with the logging failing silently.
That sounds like an argument for including the IO effect in your whole program's structure like people usually do in Haskell, no? If you change a function that doesn't access the disk (e.g. something that just grinds out a mathematical computation) into one that does access the disk by adding logging, you have a new set of possible failure scenarios to be aware of and want that to be visible.
I appreciate that this all sounds very theoretical but it can easily lead to real-world failures. I've seen "impossible" control flow cause production issues because of an unanticipated exception in a function that didn't look like it could throw. I can easily imagine e.g. leaving data in a remote datastore like redis in a supposedly impossible state because your redis-error-handling code tried to first log that an error had occurred and that logging then failed because local disk was full.
> This kind of hyperbole that you either solve all problems via the type system or #yolo is precisely what makes the Haskell community so toxic in my opinion.
Every time I've bypassed the type system I've come to regret it, usually when it caused a production issue. It's not hyperbole, it's bitter experience.
If dynamic typing was problematic we would've switched back a long time ago.
The comment above where pka snidely claims that using any alternative to types amounts to yolo is quite representative. He outright dismisses that any valid alternatives are possible, and he indirectly claims that people using other methods are being unprofessional and are cutting corners. That amounts to toxic behavior in my opinion.
That's an interesting thing to say, seeing how representative figures of (specifically) the Clojure community get really defensive really quickly once somebody questions the effectiveness of dynamic typing - which I specifically didn't do.
> He outright dismisses that any valid alternatives are possible
I was merely dismissing your incorrect claim about logging in Haskell. And yes, not taking advantage of the type system when you are already programming in a language with a type system amounts to #yoloing it, in my opinion. Doing the same in other languages is fine, since you don't have another option really.
You've already conceded that your solution is not appropriate for production, yet you keep saying the claim is incorrect. You can't have it both ways I'm afraid, and it's the definition of being defensive. You can't even acknowledge that your preferred approach to dealing with side effects has any drawbacks to it.
Meanwhile, the opposite of what you're claiming is the case in practice. Other languages allow you to use monads to encode side effects if you want to, but Haskell is the language that doesn't give you other options.
I’ve done no such thing.
> You can't even acknowledge that your preferred approach to dealing with side effects has any drawbacks to it.
It does, probably not the drawbacks you think of though (“conceptual overhead of types”?).
Generally, you seem to be awfully incompetent in a language you claim to have used for a year. That’s not a bad thing, but you don’t present your arguments with a big fat disclaimer stating that. If you did, I think there would be much less tension in these kind of discussions.
>These can be useful for investigating bugs or performance problems. They should not be used in production code.
This is perfectly inline with my original claim.
The drawback is that you're forced to structure your app around types, and this can lead to code that's harder for humans to understand the intent of. The fact that code is self consistent isn't all that interesting in practice. What you actually want to know is that the code is doing what was intended. Types only help with that tangentially as they're not a good tool for providing a semantic specification. If you don't understand that then perhaps you're the one who is awfully incompetent in this language.
>Generally, you seem to be awfully incompetent in a language you claim to have used for a year.
I disagree with the philosophy of the language because I have not found it to provide the benefits that the adherents ascribe, and I gave you concrete examples of the problems it introduces. I'm also not sure what you're judging my competence in it on exactly as you've likely never seen a single line of code that I've written in it. Perhaps if you stopped assuming things about other people's competence based on your preconceptions there would also be much less tension in these kind of discussions.
And I don’t need to. I (and anyone proficient in Haskell, really) can infer your competence with somewhat reasonable confidence based on your comments, like this one.
People who’ve just read LYAH are able to contribute to production codebases already, no problem, but they still may not have even understood basic concepts, such as functors or monads (this is firsthand experience from work), let alone monad transformers, arrows or lenses - which I consider to be a good thing. Based on your comments (and not only this thread here), this is where I’d place you in terms of competency, but of course you are more than welcome to correct me.
And this is the most hilarious aspect of Haskell community. You assume that the only reason people might not like the approach is due to their sheer ignorance of the wonders of the type system.
>People who’ve just read LYAH are able to contribute to production codebases already, no problem, but they still may not have even understood basic concepts, such as functors or monads (this is firsthand experience from work), let alone monad transformers, arrows or lenses - which I consider to be a good thing. Based on your comments (and not only this thread here), this is where I’d place you in terms of competency, but of course you are more than welcome to correct me.
Nowhere did I state that I have trouble doing any of those things. What I said is that I have not found any tangible advantage from doing it. I found that this approach results in code that's less direct and thus harder to understand. This is a similar problem to the one lots of Java enterprise projects have, where they overuse design patterns.
My experience tells me that the code should be primarily written for human readability. Haskell forces you to write code for the benefit of the type checker first and foremost.
No, I don’t, and when I happen upon somebody who is competent in Haskell and still prefers Clojure/Erlang/whatever then genuinely interesting conversations tend to happen.
This is not the case here though.
Have a good day.
> People who claim dynamic type systems are superior to static ones can be dismissed in much the same way flat earthers can because both choose to dismiss evidence they find inconvenient.
Seriously. This only feeds into his narrative.
That being said I don't think the "well just let it blow up" is a good one either. It works in a certain subset of cases (when you know that your program can only do certain things and you have a useful supervisor such as in Erlang).
I think that philosophy is part of why Clojure has historically had problems with error handling especially in its own toolchain (e.g. its rather cryptic error messages) and still doesn't really have good tooling or patterns for it (e.g. I think nil punning is a dangerous pattern, throwing useful and specific exceptions feels unidiomatic with the need for gen-class, and it doesn't seem like other solutions have garnered much mindshare).
Spec is gaining momentum, but seems to still be spewing mysterious messages that require some familiarity to decipher in some cases (although last I checked things seemed to be on the upswing and I'm quite out of date with Clojure at this point).
Even when your app fails you still want to leave a human-readable message and then e.g. log metrics on what kind of error it was and metadata about the error to some supervisor service rather than just letting whatever the deepest exception was filter through. "Our authorization request to the database failed with an authorization error and this is the metadata" saves a lot of time compared to an NPE, especially when you're the maintainer and not the writer of the code in question.
The Elm community I think is the gold standard of taking the error path as seriously as the success path when it comes to their tooling and it really pays in my experience when I use it.
When it comes to error messages, I would argue that Haskell's are no better than Clojure's. If anything they're often even less useful because of how generic they tend to be. All you'll know is that A didn't match B somewhere, and figuring out why is an exercise left to the reader.
Personally, I like nil punning and I find it works perfectly fine in practice. My view is that data validation should happen at the edges of the application, instead of being sprinkled all over business logic. If you know the shape of the data, then you can safely work with it.
The idea of doing validation at the edges also applies to functions, if nil has semantic meaning then the function should handle it before doing any nil punning. If it doesn't then it should be safe to let it bubble to a place where it does.
The idiomatic solution for throwing errors in Clojure is to use ex-info and ex-data as seen here https://clojuredocs.org/clojure.core/ex-info
Spec errors are not really meant for human consumption, but there are libraries such as expound https://github.com/bhb/expound that produce human readable output. My team is using it currently, and we're very happy with it. I do think that more could be done in the core however, and it does appear that 1.10 will make some improvements in that department.
In general, the impression I have is that Clojure errors aren't poor due to technical reasons, but simply because the core team hasn't considered them to be a priority until recently.
I haven't found a good way of switching on ex-info generated exceptions. I usually end up having to do string matching or key searching both of which feel brittle. There's ways of following patterns within the boundaries of my own codebase (e.g. a custom key I know will always be there), but that doesn't play well with the ecosystem at large because there aren't well established conventions around what's what. I don't want to come off as saying there's a fundamental reason why Clojure couldn't have better error handling. It's not a language level thing but rather a community thing. I think Python is a good example of where the community has coalesced around using specific error types even though it's a dynamic language.
I totally agree with doing validation at the edges of the program and try to enforce that in whatever language I'm writing in. I think this is a common misconception about statically typed FP that shows up in e.g. Rich Hickey's talk about types. It's quite rare for things like `Maybe` or `Either` to actually show up in your data structure (e.g. Rich Hickey's SSN example). You usually end up bubbling the error to the top of your module and deal with it at a module level. The type is just there to make sure you don't forget to deal with the error (which is the biggest thing I miss when I'm in languages which emphasize open world assumptions and don't give good tooling to create closed world assumptions; I want to know if I've handled all my errors and all states of my application!).
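A small Haskell sketch of what I mean (the domain names are invented for illustration): parsing happens once at the edge, the core logic never sees `Either`, and the type makes it impossible for the caller to silently forget the error case.

```haskell
-- Hypothetical domain error type for illustration.
data AppError = BadInput String deriving (Show, Eq)

-- Validation happens once, at the edge of the module.
parseAge :: String -> Either AppError Int
parseAge s = case reads s of
  [(n, "")] | n >= 0 -> Right n
  _                  -> Left (BadInput s)

-- The core logic works on plain Ints; no Maybe/Either in sight.
canVote :: Int -> Bool
canVote age = age >= 18

-- The module boundary: the caller must handle failure to get a Bool out.
checkVoter :: String -> Either AppError Bool
checkVoter raw = canVote <$> parseAge raw
```

Note that `Either` only appears at the boundary functions, not in the data model itself, which is exactly the "validate at the edges" shape.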
Yeah the problem is that I've found in the legacy Clojure codebases I've maintained it's rare that nil ever has a unique meaning and often ends up getting reused for a lot of different meanings. For example "A key doesn't exist in this map" and "This data is of a completely different shape than I expected" are different error conditions with different errors that usually both turn nil in the ecosystem.
This lispcast article really hit home for me and sums up some of the pain points I hit using Clojure in production: https://lispcast.com/clojure-error-messages-accidental/. You eventually internalize the compiler errors so they're not a huge deal but the ecosystem at large doesn't have a great story for runtime errors.
I'd really like to see something along the lines of dialyzer for Clojure. This talk proposes a good approach for that, I think: https://www.youtube.com/watch?v=RvHYr79RxrQ
It would be great to have a linting tool that finds obvious errors, and informs you about them at compile time. For me that would be an acceptable compromise.
Overall, I would say that it does take more discipline to write clean code in a dynamic language. The problems you describe with legacy codebases are quite familiar. I've made my share of messes in the past, but I also find that was a useful learning experience for me. I'm now much better at recognizing patterns that will get me into trouble and avoiding them.
My limited experience with static analyzers is that you also end up with unpredictable breakages of the form "hmmm... so if I leave the variable here my static checker tells me I'm wrong, but as soon as I move the variable down one level of scope it just silently fails to see the error."
Regardless I haven't used Dialyzer myself and I have a good idea of the very very finite number of minutes I've spent with Erlang proper (as opposed to just reading about it), so who knows. It'll be fun to see where this goes. Thanks for the link!
The really big innovation I'm personally waiting for is combining static types with image-based programming (e.g. Clojure's REPL) which seems like an open problem right now because it's pretty difficult to think about what static invariants can and can't be maintained when you can hot reload arbitrary code. That and better support for type-driven programming a la Idris. Working with the compiler in a pull-and-push method (which you can get a crude approximation of in Haskell with type holes and Hoogle and some program synthesis tools, but oh man if even the toolchain for that was mature that would be huge!) was as big a revelation for me as REPL-based programming in Clojure was.
Personally, I would find this very valuable because it would help catch many common errors early while staying completely out of the way.
And yeah, I can't really do development without the REPL anymore. I find the REPL makes the whole experience a lot more enjoyable and engaging than the compile and test cycle. It's really a shame that most languages still don't provide this workflow.
Haha, well that's where you and I differ. The REPL is amazing, but I still want my ability to create closed world assumptions first!
I've read that the author is looking at improving inference in it, and at generating types from Spec, so it might still find a niche after all.
And I understand completely, it's all about perceived pain points at the end of the day, and we all optimize for different things based on our experience and the domain we're working in. That's why it's nice to have lots of different languages that fit the way different people think. :)
I like having to do logging the Haskell way, actually. It allows for more coherent and easier reuse. What if you want to reuse some code, but do logging in a specific way? By allowing the caller to decide where logging goes you can readily accomplish this.
Your "pure" function can become "pure except for logging", "pure except for analytics calls", "pure except for notifications", or whatever you decide.
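Concretely, the pattern I have in mind looks something like this (`process` is an invented example):

```haskell
import Data.Functor.Identity (runIdentity)

-- The caller decides what "logging" means by passing the action in.
process :: Monad m => (String -> m ()) -> Int -> m Int
process logIt x = do
  logIt ("processing " ++ show x)
  pure (x * 2)

-- Reuse the same code with real console logging...
loggedRun :: Int -> IO Int
loggedRun = process putStrLn

-- ...or discard logging entirely, leaving an ordinary pure function.
pureRun :: Int -> Int
pureRun = runIdentity . process (\_ -> pure ())
```

The same `process` could just as well log to a file, to an analytics service, or to a `Writer`; the business logic doesn't change at all.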
Haskell is a simple and focused language, provided you use GHC with minimal extensions.
If you want an analogy, consider this like studying jazz or something. Sure, you could just notice a II V I progression and call it done, but if you pick away at each individual note, you can find a whole lot more going on behind the scenes.
Basically, I don't really consider what's happening in the blog post a bad thing. It just has a time and a place, and you need to be aware when it's the wrong time.
Though it's true that Haskell easily puts you into a mindset where you want to simplify and generalize the code as much as possible, leading to wasted time on overly general solutions. It shouldn't be that way, because if you later want to extend a solution from lists to traversables, Haskell gives you the confidence to refactor safely at that point.
Other than strings desperately needing to be purged from the library, what are you talking about?
Haskell has some really solid standard libraries, and its extended library set has some of the most sophisticated algorithms packages in the world.
I also don't think you need Conduit or Pipes or any other performance destroying free Monad libraries to deal with it. My hot take: Conduit is in fact awful and radically overused and multiple superior options exist.
If you want to do that sort of thing with Haskell, I would suggest switching to the numeric prelude.
Not counting obscure special-purpose langs like Coq.
I don't think this is good advice in the context of Haskell. Haskell allows some abstractions that aren't just "black boxes" or glorified templates. It's kind of like elementary logic: the more models there are, the fewer proofs there are, and vice versa. Analogously, when you write a more abstract function, there are fewer ways you can manipulate it, and thus it is in some sense simpler.
    Eq a => a -> MyObj a -> Maybe Foo

    class HasFoo o where
      getFoo :: Eq a => a -> o -> Maybe Foo

    instance HasFoo MyObj where ...
The fact that fixed-length lists aren't working well as a representation for polynomials is a hint. Polynomials with real coefficients form a vector space, so you should really think of them as infinite-dimensional lists of numbers (in which most of the numbers are zero).
Once you know you want to represent an infinite dimensional vector with only a few nonzero entries, you can use a sparse vector. The first library that comes up when you google "Haskell sparse vector" is `Math.LinearAlgebra.Sparse.Vector`, which lets you write something like this (I haven't run this code but it should get the job done):
import Math.LinearAlgebra.Sparse.Vector as V
poly1 = V.sparseList [1, -3, 0, 1]
poly2 = V.sparseList [3, 3]
sumPolys = V.unionVecsWith (+)
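If you'd rather not pull in a linear-algebra dependency, the same sparse idea is a few lines with `Data.Map` from containers. This is a sketch, not a benchmarked implementation; the `Poly` alias and function names are mine:

```haskell
module Main where

import qualified Data.Map.Strict as M

-- A polynomial as a map from exponent to (nonzero) coefficient.
type Poly = M.Map Int Double

-- Build from a dense lowest-degree-first list, dropping zeros.
fromDense :: [Double] -> Poly
fromDense = M.filter (/= 0) . M.fromList . zip [0 ..]

-- Addition is just a union, combining coefficients at equal exponents.
addPoly :: Poly -> Poly -> Poly
addPoly p q = M.filter (/= 0) (M.unionWith (+) p q)

main :: IO ()
main = print (M.toList (addPoly (fromDense [1, -3, 0, 1]) (fromDense [3, 3])))
```

This also makes the `x^1000000` case cheap: it's a one-entry map instead of a million-element list.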
So, I read this more as an article about trying to reinvent the wheel in a domain which isn't necessarily simple, which isn't a good idea in any language.
Predicting the future is very hard; remembering the past is much easier. If you find yourself typing the exact same pattern for the Nth time, then it's time to refactor it into a macro or a function as appropriate.
Figuring out what parts of the next 1000 lines of code you are going to write will benefit from an abstraction (and which abstraction that is) is a rare skill that comes only (if at all) with experience.
I honestly don't have time left in my life for such meaningless drudgery.
With a type system I have the computer aid me in designing the program. It keeps me honest and ensures that I don't have type errors, which are a huge class of things I'd rather not have to think too hard about.
When I program in Haskell I spend more time solving problems than fixing programming errors.
Conversely, languages like Java, C++, and Python (if you use classes) make it very easy to write simple things without abstraction, but virtually every use of abstraction goes off the rails immediately and shoots you in the foot, so good abstraction is barely a thing at all.
Pick your poison!
I think haskell is actually fine at writing code without abstraction, it's just that people don't choose haskell to go down that path, and so aren't satisfied with those "simplistic" solutions.
I could use the older STL iterators, but nooo, I use a range, with a lambda, returning a tuple with tie and pair and some more stuff from the latest C++1x standard.
And in Linux userland, I just want to check a pid but I invent some smart locking system to ensure I never get a false positive or negative.
Other kinds of "stuff from the latest C++1x standard" tend to consist of relatively harmless novel ways to express something, usually not intrinsically complex and, when inappropriate, causing only localized damage (typically puzzling syntax or slightly wrong declarations with no impact on unedited code parts, even in the same class or function).
Crazy, I know, but especially when we're doing labor in industry, even without maximal generality your code is probably going to outlive its patron corporation and then die in obscurity.
This fosters an environment that might already lead people to second guess themselves, because it's so big and new.
Calling Haskell an anti-PHP seems fair.
It's the land of mediocrity.
I've never understood this. Unless you're writing a library that you plan to publish, or already have actual cases where you need a more general solution, why spend time trying to generalise code instead of switching to the next task?
Of course, all of this is purely speculation on my part.
If allowed, my perfectionist self would, without blinking, ditch every language but Haskell. No other mainstream language can give you more control and purity. For a perfectionist this is opium.
Meanwhile, if you look at pseudo-code written by actual mathematicians or logicians, it's almost always imperative, full of side-effects and global variables. Sometimes they even use GOTO!
The function `intMap :: (Int -> Int) -> [Int] -> [Int]` can do all sorts of crazy things that are not map. The function `map :: (a -> b) -> [a] -> [b]` can do far fewer crazy things, and just from looking at the type you can say that any `b` in the result list _must_ have come from applying the function to some `a` in the input list.
Morally correct... but consider the function `\f xs -> [undefined]`, which can be typed as `(a -> b) -> [a] -> [b]`. (Obviously it could be given other types as well.)
Interested readers should check out Agda and Theorems for free!
> any `b` in the result list _must_ have come from applying the function to some `a` in the input list.
Since there exists no `a` in `undefined`, the quoted statement holds vacuously! I find that really beautiful :)
I'll have to give Theorems for Free a read, thanks for the suggestion!
There are many cases where it's worth it, so much so that it's worth at least considering whether a more generic solution is better.
I think the problem is that it's hard to predict how deep a rabbit hole like this gets - so you think it's just a few minutes extra work, but it ends up completely derailing the project.
Also, it's not a bad way to learn the ins and outs of the language, really.
The general principle does come in handy elsewhere, though. Doing the most useful work with the minimum power is a generally useful skill. I get a lot of mileage out of it in other languages, because across a couple hundred modules, modules with minimal dependencies and modules that carelessly overuse power end up quite different in character.
There's something wrong with that logic, but I'm too lazy to work out the proof in the general case.
As long as we're playing "make baseless assertions at each other", my experience says otherwise. That's going to be hard for you to talk me out of.
Programmers still seem to have this bizarre belief that programming, uniquely among all the skills in the world, is not possible to improve via deliberate practice, and an equally bizarre belief that improved skills can't possibly translate into better programs or, even worse, must inevitably translate into worse programs. I can't wrap my mind around it.
I don't deny that there certainly is a trap where you learn these skills in a sort of greenhouse environment, then fail to translate them out of that environment properly. But that's the fault of the practitioner, not the skills, and I prove by demonstration that the skills can come out of the greenhouse and improve real code. I mostly write my professional code in Go, so you can be quite assured my production code is anything but a mess of functional paradigms inappropriately applied, since that's basically impossible in Go. I find what I learned in Haskell to be incredibly useful in Go.
> you can be quite assured my production code is anything but a mess of functional paradigms inappropriately applied
The point of the article was precisely that functional paradigms applied "appropriately" are themselves a design smell.
Poly [1, -3, 0, 1]
Poly [1, 0, -3, 1]
eval v []     => 0
eval v (x:xs) => x + v * eval v xs
You either need to use foldr to defer the multiply-and-add until the end of the list is processed, or reverse the list before processing it.
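One way to spell out the two options as runnable code, assuming the lowest-degree-first representation (so `[1, -3, 0, 1]` is x^3 - 3x + 1, as quoted below):

```haskell
module Main where

-- Lowest-degree-first: the recursion above works directly (Horner's rule).
evalAsc :: Num a => a -> [a] -> a
evalAsc _ []       = 0
evalAsc v (x : xs) = x + v * evalAsc v xs

-- Highest-degree-first: accumulate left-to-right with a fold instead of
-- reversing the list.
evalDesc :: Num a => a -> [a] -> a
evalDesc v = foldl (\acc c -> acc * v + c) 0

main :: IO ()
main = do
  print (evalAsc 2 [1, -3, 0, 1])   -- 3, i.e. 8 - 6 + 1
  print (evalDesc 2 [1, 0, -3, 1])  -- 3, same polynomial, other order
```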
(Poly a) + (Poly b) = Poly $ addup a b where
  addup [] b = b
  addup a [] = a
  addup (a:as) (b:bs) = (a+b) : addup as bs
But doesn't this implementation add list elements left-to-right, so we'd end up with the result [11, 2] instead of [1, 12]?
> The polynomial x^3 −3x +1 is represented as Poly [1, -3, 0, 1]
It starts with the 0th coefficient and goes up. So adding `x+2` and `10` would be zip-adding lists [2,1] and [10,0].
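A quick sketch of why a plain `zipWith (+)` isn't enough here without the zero-padding, and why `addup` keeps the leftover tail instead:

```haskell
module Main where

-- zipWith truncates to the shorter list, silently dropping coefficients.
-- The addup from this thread keeps whatever tail is left over.
addup :: Num a => [a] -> [a] -> [a]
addup [] b = b
addup a [] = a
addup (a : as) (b : bs) = (a + b) : addup as bs

main :: IO ()
main = do
  print (zipWith (+) [2, 1] [10])  -- [12]    (wrong: the x coefficient is lost)
  print (addup [2, 1] [10])        -- [12, 1] (right: 12 + x)
```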
> It starts with the 0th coefficient and goes up.
Is a list in Haskell canonically written in the opposite order of what I expect? I expect that the 0'th element of the list [a, b, c] is `a`. In Haskell, is it `c`? Assuming `a` is the 0th element, then the coefficient for the highest-degree term is the 0th element of the Poly. And since the degree of the two polynomials doesn't necessarily match, matching up the two highest-degree coefficients and adding them is obviously wrong.
Or am I going crazy here?
module Main where

newtype Poly a = Poly [a] deriving (Eq, Show)

instance Num a => Num (Poly a) where
  Poly a + Poly b = Poly $ addup a b
    where
      addup [] b = b
      addup a [] = a
      addup (a:as) (b:bs) = (a + b) : addup as bs

main :: IO ()
main = print (Poly [1, 2] + Poly [10])
reverse (addup (reverse a) (reverse b))
Sometimes it's more productive when you join a team that has already decided on its standards. You don't learn as much, though.
At my company we're giving new candidates a live coding interview where they get one hour to write a very simple application using either Java or Scala. Candidates are free to choose between those languages.
The funny thing is that candidates who choose Scala are never able to fully finish the assignment. Even though the application is really simple, many don't finish half of it and some even get completely stuck in complex for-comprehensions and what not.
Candidates who choose Java however mostly are able to finish the assignment. The code might not always be the most elegant, but it does what it is supposed to do.
Even though I like Scala a lot, I feel it has the downside that it gives you too many options to do the same thing. This can get in the way when you are simply trying to implement some basic business feature.
(Since in general a particular piece of code is read far more often than it is written).
I think it is definitely possible to write Scala code that is easier to read than the same Java code.
However it seems that a large part of the Scala community does not see this as their main objective when writing code.
Sometimes the focus seems to be on writing code as terse as possible, which is not the same as readable.
Or the focus is on making code more generic and abstract, which can be a useful goal depending on the use case, but it's definitely not the same as readable.
Also, as with code that's elegant, or well-tested, or really well specified: beyond those explicit qualities, each of these is an additional touch point for the author in question to discover and fix the bugs before the code gets handed off and becomes my problem at some future date.
(Not saying you should gold-plate it into elegant code. Just saying that there is value that should not so easily be dismissed.)
"Don't engineer things to failure."
When you find yourself saying "I ought to be able to generalize this", that's when you need to stop and just write the code you need to write NOW. Just because you can, doesn't mean you should.
And a function that works only with polynomials will be easier to read.
Any major language has dozens of libraries that do almost-but-not-quite what you want. Any language with support for macros, templates, or generics offers opportunities to write unnecessarily generic code. (The authors of Spring managed to get into that tarpit even before Java introduced generics!)
With Go, e.g. I would not spend much time on this, because it is such a plain language. You just accept there is no fancy way of doing this and move on.
With C++ I found it a bit different. Either I mess around thinking there MUST be some way of solving this annoying problem, only to realize there just isn't. Other times I find a solution, but it tends to turn into a horrible ugly syntax mess, so I abandon it. I've pretty much given up writing anything elegant or fancy in C++. It just turns to shit so quickly.
Swift I found fairly straightforward to write. All the typical stuff that gets me stuck in C++ never seems to pose a problem for me in Swift.
Julia is the language which ought to have thrown me into the Haskell problem described, as you can do a lot of crazy stuff with types and macros. I do to some degree but mostly Julia just does what I want it to do. I guess it is partly down to the core functions being well designed and that despite the flexibility it is still not as magical as Haskell, Clojure etc.
I tend to judge the beginner friendliness of a language by how strong that urge tends to be, and I tend towards Common Lisp since it is very multi-paradigm and not very opinionated, yet with very few semantic warts to bite the user in the end. There are easier languages to pick up if you have no concept of programming (things like HyperTalk or Scratch), but they have obvious semantic deficiencies.
Haskell on the other hand is an excellent first language for structured teaching because it's easy to express concepts in, but it stubbornly resists attempts by the inexperienced or impatient to shotgun or cargo-cult a solution.
This is one area where TDD really shines. You write a simple failing test and implement something that makes that test pass. If that isn't sufficiently generic you write another test, make both of them pass, and refactor until the solution is as simple as possible while the tests pass. Repeat to get to where you need to go.
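A sketch of that loop on the polynomial-addition example from this thread, using plain assertions rather than assuming any particular test framework:

```haskell
module Main where

import Control.Monad (unless)

-- The implementation after two red-green cycles: the first test only
-- needed the cons clause; the unequal-lengths test forced the [] clauses.
addPoly :: Num a => [a] -> [a] -> [a]
addPoly [] b = b
addPoly a [] = a
addPoly (a : as) (b : bs) = (a + b) : addPoly as bs

check :: String -> Bool -> IO ()
check name ok = unless ok (error ("failed: " ++ name))

main :: IO ()
main = do
  -- Cycle 1: the simplest failing test, passed by the zipping clause.
  check "equal lengths" (addPoly [1, 2] [3, 4] == [4, 6])
  -- Cycle 2: the test that forced handling leftover coefficients.
  check "unequal lengths" (addPoly [2, 1] [10] == [12, 1])
  putStrLn "all tests pass"
```

The generalization (handling mismatched degrees) only got written because a concrete test demanded it, which is the point.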
I'll never be able to understand certain programmers' insistence that imperative code is somehow unnatural or that "we're only used to it because of momentum" or whatever. For thousands of years people have been issuing imperative instructions to each other.
"Wash. Rinse. Repeat."
This ancient culture is pretty strong, you're right. I've lost count of times I begged people to skip all this unreliable "turn right at the second intersection, rinse, repeat" and just tell me the street address long before I learned the word "imperative".
Can you expand on that? Sounds intriguing, but I'm not sure what you mean.
I know there's a tendency to over-abstract things among the Haskell community, but if you focus on pure, immutable data structures and algorithms on them you'll discover a truly wonderful and indeed pleasant and productive language.
lol, it's no wonder he never finishes a program. But this isn't a problem limited to Haskell - one can start generalizing unnecessarily in any language.
To be fair though, the Haskell ecosystem as described in this article (and from my experience) is quite frustrating, and it makes me appreciate my current language Rust that much more.
(One case you might want to consider: with that way to create polynomials, won't x^1000000 take a lot of typing?)
I want to like Haskell, I really do, but it's like learning Latin - you'll feel very smart except no one can talk with you except other people that correct your grammar.
Also, a compiler -- which is a pretty huge difference.