
Haskell without lens, text, vector, etc. is a bit like Rust with only core, not std.

The Haskell standard library is tiny. Libraries like lens are not optional. In practice you won't understand any open source Haskell without a rudimentary understanding of lens. I get why parser libraries were banned, but excluding lens, vector, and text?

I like Rust a lot, but Haskell minus its more advanced type system is just Rust plus GC. Let's not pretend this is a fair comparison of languages when it's primarily a comparison of standard libraries.




This is why I gave up on Haskell. Lens works as advertised, but it is a pain to learn and to use in practice: the abstraction is tough to grasp and it is hard to form an intuition about it. The compilation errors are laughably esoteric. The number of ad hoc squiggly operators is ridiculous. You also need to understand a lot of language extensions to get how the type checking works.

To me it looks like an impressive proof of concept for a future programming language based around it.

If I were to start a project with Haskell, the use of lens would be explicitly forbidden.


It's about as esoteric as C++ is to somebody learning it for the first time. And from that perspective, it's totally normal for errors or syntax to look weird for a long time.

Most of us, including myself, are biased towards languages like Java, C, C++, JavaScript, because those are what we learn first - and so our expectations of what errors (or syntax) look like are shaped by our early experiences.

So I don't think it's fair to say that Haskell's compiler errors or quirks are fundamentally less intuitive than something that GCC/G++ spits out even on a sunny day. Just odd when we expect errors to look a particular way, but Haskell is playing a totally different (not exactly harder) game.


I didn't say Haskell's error messages are bad. If you stick with explicit types on your functions and no language extensions, they are absolutely great. I wanted to point out that type-checking errors with lens are hard unless you really know how all the different type aliases relate to each other. That was a few years ago, though, so maybe things are better now.

C++ also had this problem with the standard containers. However, it is much easier to grasp what a dictionary is than what a random "optic" is.


> However, it is much easier to grasp what a dictionary is than what a random "optic" is.

This is exactly what I disagree with. We come from a prior understanding of mutable/imperative dictionary/shared_ptr/std::pair, because that's what we started out with.

Had we initially been trained on monads, functors, and lenses, those would be the familiar tools, and we'd go "Huh, that's an... interesting way to write code" when faced with C++ for the first time.


> because that's what we started out with

Yes, but that comes not from programming but from general life experience. Everyone knows what an actual dictionary is, and even non-programmers can easily grasp how a one-way 'map' works.

Mutation is also how the real world works. If you want to record something, you write it down—you've just mutated the world, not encapsulated your operation in the WorldState monad.

You need to build a pile of mathematical abstractions in your head before you can really get off the ground with lenses. Not everyone has that aptitude or interest.


> Mutation is also how the real world works. If you want to record something, you write it down—you've just mutated the world, not encapsulated your operation in the WorldState monad.

But is it, though? Perhaps you just appended something to the world-log in a purely functional way. Time seems to always go in one direction (at least in my experience, YMMV), kind of like a DB primary key that is monotonically increasing. It really depends on how you look at it.


Haskell has maps.

You 100% do not need to build a "pile of mathematical abstractions in your head" to use lenses. It's a handful of types and functions. Do you need to build a pile of abstractions in your head to use `std::unordered_map` or getters/setters in C++?
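
As a rough sketch (the data here is made up, and this assumes the lens package is installed), the basic vocabulary really is small:

    import qualified Data.Map as Map
    import Control.Lens

    -- an ordinary Haskell map
    inventory :: Map.Map String Int
    inventory = Map.fromList [("apples", 3), ("pears", 5)]

    -- plain Data.Map lookup
    plainLookup :: Maybe Int
    plainLookup = Map.lookup "apples" inventory

    -- the same thing through a lens: 'at' focuses on a key
    viaLens :: Maybe Int
    viaLens = inventory ^. at "apples"

    -- insert or overwrite a key with the setter operator
    updated :: Map.Map String Int
    updated = inventory & at "pears" ?~ 7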


Yes to both? C++ is not a simple language by any means.

For context, I was introduced to FP (SML in this case) around the same time I learned Java, and I still think for the vast majority of coders, an imperative map is much easier to grok than lenses.

The former only requires understanding how values are manipulated and mutated. You're going to need to understand this anyway to write software, since your machine is mutating values in memory.

Lenses, however, require complex type-level reasoning, so now you must learn both the value language and the type-level metalanguage at once, and your language also deliberately obscures the machine model. That might be powerful, but it is still an additional mental model to learn.

I mean, just look at the Haskell wiki reference: https://en.wikibooks.org/wiki/Haskell/Lenses_and_functional_...

The route to understanding them goes through Applicative and Traversals, which means having a solid understanding of typeclasses.


Lenses don't really give you anything that you can't get from (a) a sensible syntax for record updates and (b) intrusive pointers. Lenses only exist because of Haskell 98's uniquely bad support for record types. Record access and update in most other languages is just simpler.


Lenses are more than reified record labels, though. There is a hierarchy of concepts that can be freely composed based on which features your data structure actually supports. In particular, lenses can be composed with traversals ("lenses" pointing at multiple fields), yielding LINQ-like features without introducing new syntax or concepts.
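
As a small illustration (stock lens combinators only; the tuple data is made up):

    import Control.Lens

    pairs :: [(Int, String)]
    pairs = [(1, "a"), (2, "b"), (3, "c")]

    -- query: collect every second component through a composed Traversal
    labels :: [String]
    labels = pairs ^.. traverse . _2       -- ["a","b","c"]

    -- bulk update through the same kind of composition
    shifted :: [(Int, String)]
    shifted = pairs & traverse . _1 +~ 10  -- [(11,"a"),(12,"b"),(13,"c")]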

The main problem with lenses is that common lens libraries look extremely complicated at first glance and seem to be solving a very simple problem. That rightfully puts most people off of learning what all the fuss is about.


fmap replying to foldr in a subjective argument about functional lenses... what have I done to deserve this hell.


If you use lens as just a way to access records like you do in other languages, then there is absolutely nothing hard about it. Literally all you need to know is:

Name your records like "data Prefix = Prefix { prefixFieldName :: ... }", call "makeFields ''Prefix" once at the bottom of your file, and use "obj ^. fieldName" to access and "obj & fieldName .~ value" to set.

That's it. You now have 100% of the capabilities of record update in any other language. This doesn't get any simpler in any other language. It even pretty much looks like what you would do in other languages.
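
Concretely, the whole recipe looks something like this (the Person record is invented for the example, and makeFields needs a few extensions switched on):

    {-# LANGUAGE TemplateHaskell, FlexibleInstances,
                 FunctionalDependencies, MultiParamTypeClasses #-}
    module Main where

    import Control.Lens

    -- field names carry the lowercased type name as a prefix
    data Person = Person { personName :: String, personAge :: Int }
      deriving Show

    makeFields ''Person   -- generates lenses called 'name' and 'age'

    main :: IO ()
    main = do
      let p = Person "Ada" 36
      print (p ^. name)      -- get a field
      print (p & age .~ 37)  -- "set" a field (really: build an updated copy)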

I'll grant you, Haskell and lens do a terrible job of pointing out the simple subsets of functionality that let you get the job done before jumping in the deep end.


Yeah, so it's a worse way of accessing record fields than the one present in 99% of other programming languages. Your own description makes this plain. Let's compare it to JavaScript:

* I don't need to import a module to make available the syntax for getting and setting fields of an object.

* I can use the same syntax for any object, and don't have to worry about doing a bunch of code generation via a badly-designed metaprogramming hack.

* I don't have to worry about adding prefixes to all my field names.

* The syntax uses familiar operators that I won't have to look up again on Hackage if I stop writing JavaScript for a few months.

* No one modifying my code can get "clever" and use one of ~50 obscure and unnecessary operators to save a couple of lines of code.

What bugs me is when Haskell advocates try to use all the additional esoteric features of the lens library as an excuse for this fundamental baseline crappiness.

Haskell really just needs proper support for record types. Then people could use lenses when they actually need lenses (never?). At the moment, they're using lenses because they want something that looks almost like a sane syntax for record updates.
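
For reference, this is what the built-in record syntax gives you today (types invented for the example); one level is fine, but nested updates get verbose fast, which is exactly the gap people plug with lens:

    data Address = Address { city :: String } deriving Show
    data Person  = Person  { name :: String, address :: Address } deriving Show

    -- one level: perfectly readable
    rename :: String -> Person -> Person
    rename n p = p { name = n }

    -- two levels: already clumsy, and it only gets worse as you nest deeper
    moveTo :: String -> Person -> Person
    moveTo c p = p { address = (address p) { city = c } }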


Record types are not a solution to the problem lens solves. Lens is a good library and a good concept. If we spent some time on it in programming class, most people would get it. When moving to non-Haskell languages, the lack of proper lenses is something I notice almost immediately.


I know what the lens library does - I write Haskell for my day job.

In practice, the main reason people use it is to work around the deficiencies of Haskell's built-in record system:

> I never built fclabels because I wanted people to use my software (maybe just a bit), but I wanted a nice solution for Haskell’s non-composable record labels. (http://fvisser.nl/post/2013/okt/11/why-i-dont-like-the-lens-...)

The other features of lenses don't strike me as particularly useful. YMMV. I'd also question the quality of the library. It's full of junk, e.g. http://hackage.haskell.org/package/lens-4.17.1/docs/src/Cont..., which is just an invitation to write unreadable code.


My biggest use case for lenses that I miss in other languages is the ability to interact with all elements of a collection, or elements in deeply nested collections.

For example, say I have records with a field named 'categories', holding a list of objects that each have a field named 'tags', and I want all of those tags in one flat list. Without nested loops, lens makes it easy: 'record ^.. categories . each . tags . each'. I could just as easily update them all, etc. It's just so easy to do this kind of data munging with lens that writing fors, whiles, etc. in other languages is painful.
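
Roughly, in code (the Item and Category types and their fields are invented to match that description, and makeFields needs the usual extensions):

    {-# LANGUAGE TemplateHaskell, FlexibleInstances,
                 FunctionalDependencies, MultiParamTypeClasses #-}

    import Control.Lens

    data Category = Category { categoryTags :: [String] } deriving Show
    data Item     = Item     { itemCategories :: [Category] } deriving Show

    makeFields ''Category   -- generates the 'tags' lens
    makeFields ''Item       -- generates the 'categories' lens

    -- collect every tag of every category into one flat list
    allTags :: Item -> [String]
    allTags r = r ^.. categories . each . tags . each

    -- or rewrite them all in one go (returning an updated copy)
    prefixTags :: Item -> Item
    prefixTags = categories . each . tags . each %~ ("#" ++)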


> And from that perspective, it's totally normal for errors or syntax to be weird looking for a long time.

This isn't normal. This is just using a tool that sucks. Those who consider this normal are just masochists.

Rust, Elm, etc. have great error messages. That took a lot of time and effort to achieve. The fact that it is impossible to implement a C++ compiler that produces good error messages is just proof of how broken the language is. The fact that some people find this normal is just Stockholm syndrome at work.


Not at all. Several languages, Rust included, take understandable errors seriously. I am a Rust newbie, but the errors make it extremely easy to grasp and fix my code.


You say "not at all", but only cite Rust (which I didn't mention). C++ has horrific error messages, certainly at the level of a bad Haskell error message.

I'd say my point stands pretty well.


So your defense for Haskell's error messages is that they're slightly better than what you get from a massively entrenched language with famously user-hostile error messages?

Good luck with that :)


I don't think C++ errors are bad any more. 2019 compilers generally produce very good error messages. The situations where you get pages of template nonsense in an error are becoming fewer and farther between all the time.


C++ has bad error messages because of language design. Contemporary C++ compilers are very good at reporting clear error messages about common mistakes, but template heavy code still yields arcane error messages. Templates are untyped, so there is no way to give sensible error messages when defining or instantiating a template. Instead you have to typecheck after template expansion, at which point you are left with an error message about compiler generated code.

There are some proposals that address this (e.g., concepts), but none of them are part of the language standard yet. Concepts in particular made it into the C++20 draft, but they also made it into a draft of the C++17 standard and were ultimately rejected. Somewhat vexingly, C++ concepts come with the same problem, only at the level of concepts instead of templates.


Some C++ has horrific messages, but new compilers do a much better job of complaining about most errors - some even suggest fixes. I don't remember seeing Haskell do that.


Haskell does do that. It provides suggestions and alternatives: you probably meant X, or you forgot an import of Y, or try enabling the Z extension.


You are predicating your point on your experience with C++. I am saying it does not apply to other languages.


Rust errors are not all good; some of them, like the "inferred type" ones, are pretty bad.


We learn first the languages that were not developed as a research platform that happens to see a little production use.


Take a look at https://github.com/well-typed/optics.

It's like lens, but with the design goal of being easier to use and producing better error messages.


Same for me, except also the incredibly obtuse set of ~20 compiler pragmas you need in Haskell. If you ask for help with some simple programming concept, like multiple dispatch based on type at runtime, you first get a bunch of tone-deaf “you shouldn’t ever want to do that” responses from the Haskell community, followed by a huge tome of all the language extensions (fundamentally changing or adding syntax) that you need.


With the exception of a very few extensions that I've never seen used in practice, Haskell language extensions are mutually compatible and create a language that is a strict superset of the old one. In this sense, I'm not sure how they're much different from the -std=c++14 flag in GCC.


If you need to know and understand the syntax implications of highly generic type-level constructs coming from a dozen external pragmas just to be able to read the code, then that's a severe language design problem.


That's a total stretch; lens is not used in GHC, for example, or in lots of other smaller compilers written in Haskell. It is used in Ermine, but that has been stuck in a semi-complete state for a while now, and Ekmett has moved on.


I second this.

I’ve written tens of thousands of lines of Haskell, and I’ve never used lens. Also, putting it in the same category as text and vector doesn’t make sense — these are indeed unavoidable, and practically all my projects use them.


Thirded. No lens in pandoc (50k lines of Haskell), darcs (40k), or most hledger packages (15k).


I disagree about lens. My new projects don't use it in the main code base, and that was a great decision:

- TAGS work like a charm to access field definitions

- compile times are ok

Of course, if a library's API needs lens, it's used.


What do you mean by TAGS?


A file named TAGS, generated by hasktags (in the case of Haskell), that gives you an easy way to "jump to definition" from Emacs or other editors. It's a good way to navigate codebases even if you don't know how to build them.


Presumably etags/gtags/hasktags etc., i.e. having built a TAGS database with such a helper program, you can use it in an editor to jump from a field name to its definition. That wouldn't be the case with a lens accessor.


GHC does not use lens and it is, it seems, ok.



