Haskell turns lenses into something awful by doing what so many FP libraries do: over-generalizing themselves into impenetrable category-theoretic self-indulgence. Why do I need to understand profunctors to get a value inside of a record?!?
Lenses themselves are awesome. The "bad", naive, not-general type is really simple.
```haskell
data Lens a b = Lens { get :: a -> b, set :: a -> b -> a }
```
So simple that modifiers can be mechanically generated for your data types. And suddenly, you can express super-duper complex modifications without all the obnoxious boilerplate it'd otherwise take.
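To make the mechanics concrete, here is a minimal, self-contained sketch of that naive lens (note it needs `data` rather than `newtype`, since there are two fields; the helper names `compose` and `modify` and the record types are invented for illustration):

```haskell
import Data.Char (toUpper)

-- The naive lens: a getter paired with a setter.
data Lens a b = Lens { get :: a -> b, set :: a -> b -> a }

-- Lenses compose: a lens to a part of a part.
compose :: Lens a b -> Lens b c -> Lens a c
compose outer inner = Lens
  { get = get inner . get outer
  , set = \a c -> set outer a (set inner (get outer a) c)
  }

-- Modify through a lens.
modify :: Lens a b -> (b -> b) -> a -> a
modify l f a = set l a (f (get l a))

data Address = Address { city :: String } deriving Show
data Person  = Person  { addr :: Address } deriving Show

-- The mechanically generated lenses for the fields above.
addrL :: Lens Person Address
addrL = Lens addr (\p x -> p { addr = x })

cityL :: Lens Address String
cityL = Lens city (\a x -> a { city = x })

main :: IO ()
main = do
  let p = Person (Address "paris")
  putStrLn (get (compose addrL cityL) p)
  print (modify (compose addrL cityL) (map toUpper) p)
```

The key point is the `compose` function: it is exactly the boilerplate you would otherwise write by hand at every nesting level, factored out once.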
That lens is completely useless. If you are equating lenses with struct accessors, then you're adding abstractions for nothing. Lenses are good for things like extracting all unique values out of deeply nested data structures.
Say you have these data structures:
```haskell
data Struct1 = Struct1 { someStuff :: [Struct2] }
data Struct2 = Struct2 { otherStuff :: HM.HashMap Text Struct3 }
data Struct3 = Struct3 { name :: Text }
```
And you have a list `[Struct1]` and want a `[Text]` of the `names` of Struct3. How do you do this in an imperative language? Manually accessing the fields, for loops, etc.
The van Laarhoven formulation of lenses and the optics in ekmett's lens library in Haskell make this trivial.
`myList ^.. someStuff . each . otherStuff . each . name`
Now, what if you wanted to modify all the names and prefix them with 'id-'? Again.. trivial.
`myList & someStuff . each . otherStuff . each . name %~ ("id-" <>)`
Your formulation is just field accessors. Not composable, and ultimately no better than just using the generated field accessors Haskell provides.
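For intuition, the van Laarhoven trick behind those one-liners can be sketched in a few self-contained lines without the lens library, using `Data.Map` in place of `HashMap` and plain functions (`toListOf`, `over`, both reimplemented here) in place of the `^..` and `%~` operators; the traversal name `names` is invented:

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))
import qualified Data.Map as M

-- A van Laarhoven traversal: one polymorphic function that yields
-- both "collect all targets" and "modify all targets", depending on
-- which Applicative you instantiate it at.
type Traversal' s a = forall f. Applicative f => (a -> f a) -> s -> f s

-- Instantiate at Const [a] to collect.
toListOf :: Traversal' s a -> s -> [a]
toListOf t = getConst . t (\a -> Const [a])

-- Instantiate at Identity to modify.
over :: Traversal' s a -> (a -> a) -> s -> s
over t f = runIdentity . t (Identity . f)

data Struct1 = Struct1 { someStuff  :: [Struct2] }
data Struct2 = Struct2 { otherStuff :: M.Map String Struct3 }
data Struct3 = Struct3 { name       :: String }

-- A traversal reaching every name, built from ordinary `traverse`.
names :: Traversal' [Struct1] String
names f =
  traverse (\s1 ->
    Struct1 <$> traverse (\s2 ->
      Struct2 <$> traverse (\s3 -> Struct3 <$> f (name s3))
                           (otherStuff s2))
                 (someStuff s1))

main :: IO ()
main = do
  let xs = [Struct1 [Struct2 (M.fromList [("k", Struct3 "alice")])]]
  print (toListOf names xs)
  print (toListOf names (over names ("id-" <>) xs))
```

The lens library essentially generates and composes traversals like `names` for you, which is why `^..` and `%~` work against the same composed path.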
One thing that does always strike me about such libraries is that they are often relatively easy to use for common scenarios like that, and the mental model is really not so complicated or abstract as to require category theory or something like that. When you use such libraries, I don't think you are really doing deeply category-theoretic thinking. More so just seeing how the types fit together and maybe thinking of some higher-order functions that you want to combine and apply.
If you take Haskell and remove all the types (e.g. Scheme or JavaScript), you can have all the same abstractions, with basically the same syntax and operational behavior. But then of course the operational behavior is just applying/composing functions in the end... And because there are no higher-kinded types to deal with, category-theoretic language is less likely to sneak in.
I say this as someone who has used Haskell for years and studied category theory / PL theory as well. And I like all of those things; I just think it's often not necessary to grasp that theory in order to build the mental model and be able to use these tools.
I think that some code duplication is also not always the worst thing. Just because strings can be "combined" through concatenation and numbers can also be "combined" through addition does not necessarily mean you need one general notion/function to capture both, especially because if you take that to an extreme then you end up with some function that "sqoogles the byamyams" (i.e. the abstraction has become so general that the only way people really understand/use it is by looking at its concrete instantiations).
> If you take Haskell and remove all the types (e.g. Scheme or JavaScript), you can have all the same abstractions, with basically the same syntax and operational behavior.
This isn't generally true: such a language would also have to be lazy by default, and that property matters more than having all the types, because certain abstractions that can be expressed "naturally" in Haskell are a byproduct of non-strict semantics. Laziness also sometimes helps avoid worst-case evaluation costs, something that eager languages have to live with at all times, no matter the abstractions.
This notion of a plain old lens has been around for a long time. See [0] for an example from 2007, but you can find this construct in functional programming at least back into the early 1990s.
However, it was hardly ever used in practice because it isn't polymorphic and, perhaps more importantly, it can only be used for accessing a single item from a record.
The reason for the explosion of interest in "optical" libraries in the 2010s is the development of other optics that can simultaneously access multiple fields (in order), access a functor full of fields (unordered), access a field that may not exist, pattern match on a variant, zip an indexed set of fields, etc. And all of this is done in a composable and transparent way.
Haskell's Traversable functors existed in 2008, and while your lens type can also be traversed, you have to explicitly convert it to a traversal to use or compose it that way.
I'm sorry you feel the need to understand profunctors in order to use optics. Ideally the system would abstract all that away. Unfortunately in Haskell, some of these components leak out (how much varies from implementation to implementation) because this abstraction uses constraints and polymorphism in a very novel way. It's kinda amazing that Haskell's type system accepts it at all.
The alternative seems to involve a proliferation of casting of optics upon every use, requiring either chaining of casts or writing n^2 casting functions from between every related pair of optics.
The whole lens thing was what really turned me off to Haskell in the first place. It seems like I would never get it and it's hard to want to be in a language where you don't get something.
Is there a language where you get everything? (i.e. there's nothing you don't get)
Based on the number of "C++ experts" I've interviewed over the years who don't know much about template metaprogramming or the weird rules around initialization and class layout, yet have shipped meaningful code while avoiding most of the weird parts of the language: no, and it doesn't seem to matter much.
Even languages like Python can get weird, such as default arg initialization:
```python
def add_item_to_list(item, list_to_add=[]):
    list_to_add.append(item)
    return list_to_add

a = add_item_to_list(1)
a.append(3)
b = add_item_to_list(2)
```
What do `a` and `b` print?
This is my go-to Python 1st-round question, and many prolific Python coders don't get through this.
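For readers following along, here is a runnable sketch of the behavior, together with the standard sentinel fix (the function name `add_item_fixed` is invented):

```python
def add_item_to_list(item, list_to_add=[]):
    # The default list is created ONCE, at function definition time,
    # and shared by every call that omits the argument.
    list_to_add.append(item)
    return list_to_add

a = add_item_to_list(1)
a.append(3)
b = add_item_to_list(2)
print(a, b, a is b)   # [1, 3, 2] [1, 3, 2] True

# The usual remedy: use None as a sentinel and allocate per call.
def add_item_fixed(item, list_to_add=None):
    if list_to_add is None:
        list_to_add = []
    list_to_add.append(item)
    return list_to_add

print(add_item_fixed(1), add_item_fixed(2))   # [1] [2]
```

Both `a` and `b` are the very same default list object, which is why the second call sees the `3` appended between the calls.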
From what little I've learned of Haskell so far, it seems like one can eschew the weird PL-heavy parts of the language and just be productive (with the tradeoffs the language contains, e.g. GC).
You ask this in interviews...? What makes this a good 1st round question in your mind? What is there to learn about the candidate?
It seems to me that it's one of those things you've either encountered (and go "well, that's weird") or... haven't encountered (and go "well, that's weird" upon having it explained). Have pop quizzes about the sharp edges of various languages proved useful as a filtering mechanism?
As someone who deals in low-level code and has been at several different companies... Absolutely. This stuff can be the difference between a good product and 'leaks customer secrets at runtime'. It is irresponsible to program in a language and ship code that you do not understand.
It is scary that software engineers want to be considered professionals, and yet don't want to be held responsible for understanding the tools they use.
Would you trust a bridge designer who didn't really understand the intricacies of concrete? Not me, no thank you.
The responsible thing to do is to have a robust code review culture and cover your code with enough tests that boneheaded mistakes in this vein have a very low chance of making it into production. There is no amount of experience that can prevent boneheaded mistakes. If your users' privacy depends on your programmers understanding all the footguns and weird little corner cases in the tools they use, and on implementing that understanding perfectly whenever they write code, you already don't deserve said users' trust.
Is it fair to expect a candidate claiming fluency in language X to back that claim up? Sure. But I don't think the general thrust of your comment is very productive.
If you're coming into a role saying "I'm very familiar with Python" and you don't know one of the most famous 'gotchas', that says something.
If you're coming into a role and saying "Hey, I'm not so familiar with Python but I have experience in other languages and I'd love to learn it as part of this role" I think that question is not appropriate.
It's pretty easy to just never bump into certain corners of languages -- even if you use said language extensively. I've used Python for, like, 8(ish) years professionally. I maintain a few moderately popular open source libraries. I honestly don't recall exactly what happens when you do the `list_to_add=[]` thing, because these days I pretty much never pass in a list with the purpose of having it be mutated by something else. All I know is that I did try it once, a long time ago, and that it does something unexpected. But I couldn't tell ya more than that just by looking at it.
Color me dismayed to discover that I'm actually a hack fraud.
> But I couldn't tell ya more than that just by looking at it.
>
> Color me dismayed to discover that I'm actually a hack fraud.
What I said elsewhere still stands. Yes... this kind of thing is the difference between 'works well' and 'exposes private user data to other users'. I'm glad you don't use such constructs, but realistically, given Python's closure rules and when data with interior mutability are captured, this issue arises in more than just this particular situation. And like I said... it can easily lead to exposing private information to other users.
The question is appropriate because it allows for further elaboration, even if a candidate isn't familiar with the call-by-assignment semantics of Python. It enables a follow-up discussion about the differences between call by value, call by reference, immutability and so on.
Sure... junior engineers don't need to know everything. But if someone is putting themselves out there, by offering professional services for example, then yes they ought to know this sort of thing.
It would be extremely hard for a Python developer not to encounter this core property of the language (call by assignment), unless the candidate has just graduated from a random 1-month bootcamp, which may be exactly what the first-round question is meant to discover.
I think most people would find these clearer if they used the functions view/set/etc, rather than the various operators which are just alternatives to those.
> The basic operators here are ^., .~, and %~. If you’re not a fan of funny-looking operators, Control.Lens provides more helpfully-named functions called view, set, and over (a.k.a. mapping); these three operators are just aliases for the named functions, respectively. For these exercises, we’ll be using the operators.
But the person who "is not a fan of funny-looking operators" still has to read them; and that's the part that makes funny-looking operators undesirable to them in the first place!
To be fair, if after reading that paragraph they find it so insufferable, they can skip the article. The author makes a choice and acknowledges other opinions but, at the end of the day, is entitled to their own.
> I think most people would find these clearer if they used the functions view/set/etc, rather than the various operators which are just alternatives to those.
Everyone has their own tastes. Someone unfamiliar with the notation of ordinary algebra might argue for "you will have to find two numbers that the difference between the two is 10 (that is, so much as is our number) & that we make the product of these two quantities, the one multiplied by the other, exactly 1, that is, the cube of the third part of the variable" (https://www.maa.org/press/periodicals/convergence/how-tartag...) as easier to understand than "find u and v such that u - v = 10 and u v = 1", but I think most modern readers would agree that people uncomfortable with the algebra are better served by learning how to read the latter than by sticking with the former. (And, I think, also that it doesn't help either to keep the variables but replace the symbolic operations by words: `(and (eq (subtract u v) 10) (eq (mult u v) 1))`, in pseudo-Lisp.)
The thing is, basic algebra notation has two major advantages over ad-hoc operators in some Haskell library: (1) it is widely taught and understood and extremely widely applicable, and (2) each operator has a standard name that is widely explained.
In contrast, most Haskell notation I've seen is either an ad-hoc invention for some library, or it is an ASCII version of notation in a niche domain like category theory. Even Haskell's >>= operator for flatMap/bind seems to be an invention, as far as I can tell the equivalent concept in CT is Kleisli composition, denoted by a sharp sign and the regular composition operator (as far as Wikipedia shows - I'm not formally trained in CT).
Additionally, people rarely if ever give a proper name to this notation in Haskell, making it completely impenetrable to even represent the formulas in your mind. How am I supposed to read `user1 ^. name` ? When I see `∇⨯f` I know how to read it (del cross f, or nabla cross f, or curl f) because that was an explicit part of how I was taught the operation (and note that it is not an arbitrary digraph, it can really be computed as the cross product of the pseudo-vector nabla and f), but Haskell tutorials and documentation completely skip this step, in my experience.
> The thing is, basic algebra notation has two major advantages over ad-hoc operators in some Haskell library: (1) it is widely taught and understood and extremely widely applicable, and (2) each operator has a standard name that is widely explained.
> In contrast, most Haskell notation I've seen is either an ad-hoc invention for some library, or it is an ASCII version of notation in a niche domain like category theory.
But that's my point! When it was introduced, none of the symbology of school algebra satisfied condition (1) or (2); it was ad hoc (perhaps supported by some reasoning, as Recorde's for the equals sign being two parallel lines, "than which no two things can be more equal", or perhaps not), and was found fully as abstruse and obfuscatory as you find symbolic operators in Haskell. Even Arabic numerals were thought a device for lies and deception, compared to good honest Roman numerals, when first introduced. If we hadn't adopted those operators anyway, then we'd still be writing out all our equations in words as Tartaglia did, and our mathematics would be the poorer for it; had we stopped our notation earlier, we would still, as in pre-Arabic-numeral times, send our children to special advanced schools to learn multiplication. New notation when first introduced is confusing and strange, but, once made commonplace, enables new thought, so that what was a niche domain becomes commonplace.
(For that matter, I'd disagree about (2). For example, there is a symbol called the vinculum that is used for many purposes in mathematics (https://en.wikipedia.org/wiki/Vinculum_(symbol)). Probably almost everyone has used that symbol for one of its meanings, but I'd argue that very likely almost no-one knows its name.)
I'm not claiming + and = have some underlying meaning, of course they were taught. My main point is that symbols are only truly useful when they are ubiquitous in their field. Before that, they are obscure and should be avoided.
If the Haskell community or the Lenses community wants to do the hard work of teaching everyone their new symbols, go right ahead. Until that work is done though, I would advice everyone to avoid using these symbols.
In regards to the vinculum, it's true that I've never heard that name, but I learned different names for its different uses - not in English though, as I took mathematics in Romanian, and some of the terminology and even notation is not the same (for example, repeating decimals are denoted using parentheses, not a vinculum - as in 1/30 = 0.0(3)).
Not that you would necessarily use the same words in Haskell, but I'm curious how you would read `user1^.name` in Pascal, or `user1->name` in C or `(*user1).name` in C?
(Edit: I'd also be curious about `x += y` and `x << y` in C.)
user1^ is what the user1 pointer points to, if I remember my Pascal correctly. It's a structure and the .name says to take the name field of that structure. So ^. isn't a digraph, it's two separate operators.
Same with your C example. It's doing the exact same thing in C, except the operators (* and .) are separated from each other.
-> is a digraph. It means the same as `*.` I don't usually pronounce it, I just think of it as itself. If I have to say it to myself mentally, I say "sub". I'm not sure I've ever tried to say it aloud to a coworker; if I did, I might have said "arrow" or something.
You do have to learn these things, just like you have to learn all the other operators. But simiones still has a point - at least the math-based operators are much more widely known and understood than the category-based ones.
But what are their names? That was the standard applied to Haskell: that the operators needed well-understood names. C++ does not have that. Worse still, C++ operators are completely unintelligible without context.
For example, '>>=' (commonly called bind in haskell) is well-specified. Anything I see it used with is going to be a Monad, and thus follow certain laws. Examining the imports, I can immediately tell what any operator is.
In C++? Forget about it. The 'left-shift' operator, which is supposed to shift bits to the left, can somehow also print things to standard output. In what world can the terminal be bit-shifted left? In fact, we understand this only because no one reads 'std::cout << "Hello world"' as 'shift std::cout left by "Hello world"', because such a thing is nonsense, whereas '1 << 2' is '1 shifted two bits to the left'.
EDIT: And then, when you add in external libraries, it gets worse. `<<` can be used for creating lexers and parsers in boost if I recall correctly. Completely lawless, and, when you survey the ecosystem, also dangerous. So many bugs in C++ and such due to this.
I agree. I do feel there is a double standard here. C gets away with using `->` and `^=` and `*foo.bar` mixes prefix and infix and postfix with hard to remember precedence rules; C uses `{` and `}` instead of `BEGIN` and `END` and no one bats an eye. But when lens libraries use operators, suddenly some people lose their mind.
I prefer `rec^.field` with operators in lens for the same reason I prefer `ptr^.field` in Pascal over hypothetically writing `ptr DEREFERENCE ACCESS field` or `ACCESS(DEREFERENCE(ptr), field)`. It lets me hide what is essentially just plumbing-like grammar behind operators in order to let me focus on the parts of the program related to the business logic at hand, namely `rec`, `ptr` and `field`. Otherwise the plumbing tends to drown out the more important parts of what is going on.
First of all, some people's eyes already glaze over when looking at C. It's only accepted because it has become near ubiquitous in computing, and notation is much more palatable when it is consistent for a whole domain. This is a very important point that many people miss: the reason why Java, C#, JS, even C++ to some extent get away with lots of non-maths operators is exactly because they have chosen to copy each other. That's a very powerful thing.
Second of all, even within C, these operators are ubiquitous. You will use . and -> and * to deref and & to take the address, and = for assignment, and ++ or -- for increment/decrement, and ==, &&, ||, ! for logical operators, and the <operator>= notation for applying a change to a variable, and [] for subscripting, in almost every program you write in C. Shortening a very common operation to a symbol is much more easily acceptable than shortening a more rarely used operation, or one that's specific to a library.
Basically, the status quo in programming as I see it is this: it's OK to use symbols instead of words for (a) syntax in your language (C curly braces, dots), or (b) ubiquitous constructs that are used virtually all over any program (+=, *), or (c) if they are already well-known symbols in other popular languages, or in non-programming fields (such as the near-universal regex syntax). Even (a) and (b) benefit a whole lot from familiarity with other languages, as should be expected. They also still require proper names - which the C standard for example gives for every operator, as do C tutorials.
Thanks for all your replies. While I still think it is better to use operators in order to reduce clutter in the code, after all in order for any notation to become ubiquitous it must go through a period of not being ubiquitous, perhaps more can be done to introduce the operators in the documentation. It should be pretty easy to give all the operators names since almost all of them already have a named function implementing them.
Note that the C and C++ standards have names for all of these operators. "->" is called the "member access" operator (though so is ".", to be fair). It also has the advantage of being a pretty clear typographic symbol - it can be called "arrow".
As for <iostream> overloading the left shift operator to use for writing to an ostream, that is widely regarded as a terrible idea, and such overloading is generally frowned upon in the community. In general, <iostream> was created before C++ was even standardized, and well before good practices became established, and doesn't reflect at all more common usage of the language (see also the fact that they don't throw exceptions by default).
The only other somewhat widely used example I know of where operators are overloaded with entirely different meanings than their standard usage is indeed the boost::spirit parser generator library, where they are trying to approximate the syntax of typical parser definitions using C++ operators, with at best mixed success. And they don't just overload <<, they overload all the operators with completely different meanings - such as overloading the unary plus (+x) operator to mean "token appears 1 or more times" and even the dereference operator (*x) to mean "token appears 0 or more times" and so on. Still, I don't think too many people are crazy enough to mix boost::spirit with regular C++ in the same file.
Note also that operator overloading is not commonly supported in other C-family languages, typically because they are trying to avoid C++'s usage of it.
What are their names? "Member access" is the name of "->". That's a pretty well-understood and easily-understood name.
> Worse still, C++ operators are completely unintellible without context.
Um, compared to what? The C++ operators require less context to understand than the Haskell lens operators by a large amount, so this seems like a really odd criticism.
Your last two paragraphs... yeah. Operator overloading has been used for some, uh, unusual uses, even by the standards committee.
> Um, compared to what? The C++ operators require less context to understand than the Haskell lens operators by a large amount, so this seems like a really odd criticism.
This is my exact point. The C++ operators require much more context.
Suppose you see:
`v << i`
in c++. What is the type of v? What is i? Is it an int? Or an output stream? Or something else entirely! I have no idea. By the C++ standard it could be anything. In fact, it is heavily dependent on what's in scope, and indeed, several different things could overload it depending on what's in scope. Very confusing.
In haskell,
'x ^. i'
tells you everything. 'x' is any type, but 'i' is mandated to relate to 'x' in that 'i' is an optic targeting 'x' and returning some 'subfield' of x. In particular, 'i' is 'Getter a b', where 'a' is the type of 'x'. That is absolute. There is no other interpretation. Haskell type class overlap rules prevent any module imports from confusing the matter.
I agree with you about the context dependence being problematic in C++ (and that is also why overloading operators to give them entirely different meanings is deeply frowned upon, as I mentioned in other contexts).
However, I was curious - is it not possible in Haskell for two different libraries to introduce the same operator with different meanings, and still be used in the same program (though perhaps not in the same file)? That would be an unfortunate limitation all on its own, and one that mathematical notation certainly doesn't share.
In general, re-using symbols for similar purposes is also an extremely commonly used tool in mathematics, for the exact reason that constantly introducing completely new symbols is not worth the mental hassle. For example, when defining the "dot product" and "cross product" operations for vectors, entirely new symbols could have been introduced, but it was preferred to reuse common multiplication symbols (hence the names) as there is some relation to the real number multiplication operation.
> is it not possible in Haskell for two different libraries to introduce the same operator with different meanings, and still be used in the same program
It is possible, of course, just like it's possible for two different libraries to have a function of the same name. But it's always possible to determine statically (by looking at the import list, etc.) which of the instances the usage refers to.
I'll agree that overloading << to do output is a WTF moment. But at least I can hire programmers off the street who know what output is. In contrast, that last paragraph of yours looks like you were carrying around a bowl full of random jargon, tripped, and spilled it all over your keyboard.
For your C++ example, I have to know the type of v and i (and that in C++, it's possible to overload operators). For your example, I have to know what an "optic" is, what a "Getter" is, what Haskell thinks a "subfield" is (and that means that 'x' can't be any type, but has to be the kind of type that contains a subfield), and I suspect several other background things that I don't even know what they are.
C++ makes you know a bit more about the types (though even in Haskell, you had to know that x had subfields). Haskell makes you know much more concept-level context.
> In contrast, that last paragraph of yours looks like you were carrying around a bowl full of random jargon, tripped, and spilled it all over your keyboard.
It's scary that this sort of criticism is considered appropriate on Hacker News, which is supposedly a forum for professional software developers. Very sad. Everything I mentioned should be taught in a 2nd- or 3rd-year computer science course (namely, everything I mentioned was about type systems, which is a standard topic most university programs will cover).
I agree.
I find it especially odd, that people think that programming and computer science is a complex topic that takes years to learn and master, but complain about the appearance of jargon with an appeal to "ease of use".
If the concepts are the hard part to learn, then operators and technical speech are the least of your worries, but a mandatory part on the path to acquiring the art.
The most important thing math has shown me, is that notation and precision of speech take a huge part in making things clear. Yet people get heart attacks when this is applied to informatics.
> For your C++ example, I have to know the type of v and i (and that in C++, it's possible to overload operators). For your example, I have to know what an "optic" is, what a "Getter" is, what Haskell thinks a "subfield" is (and that means that 'x' can't be any type, but has to be the kind of type that contains a subfield), and I suspect several other background things that I don't even know what they are.
I still think this appeal to familiarity is dangerous. You take your knowledge about C++ as obvious and standard, and disparage knowledge about Haskell as a bowlful of jargon. But no-one comes into the world finding C++ intuitive, either; it has to be taught. And teaching C++ without referring to operator overloading, or with other attempts to obscure the way its practitioners actually talk about and practice it, can only be supported so far. Similarly here—I suspect more of the people who code with lenses in Haskell use the symbolic operators than the 'word' operators. Though I have no data to back that up, obviously it's how the writer of this tutorial thinks about them. So why shouldn't they use the format that makes sense to them, and that they hope to teach people to use?
> (and (eq (subtract u v) 10) (eq (mult u v) 1))

could be:

(u `subtract` v `eq` 10) `and` (u `mult` v `eq` 1)
True. I used Lisp because I think it's reasonably common to use words for arithmetic operations in many Lisp dialects, but not in Haskell. But it's not the prefix notation to which I was objecting, but the idea of spelling out every operation when we've got a notation specifically designed to facilitate rapid thought and computation without that.
Lens is one of my favorite libraries, but I can understand why optics, in general, is a divisive topic. Like many things in Haskell, you won't appreciate optics without suffering through a steep learning curve.
Optics don't just save you a few keystrokes. Instead, they make trivial the tedious code that you would otherwise never bother to write. And more importantly, they're composable. So it's impossible to appreciate their effectiveness without using them on a real-world app. For example, I've worked on an app with three layers, each with its own record types, and without the lens library the pain of converting between records would have been insufferable. Instead, most developers would couple the three layers together to avoid the problem, which makes the codebase brittle.
When finding out about lenses some years ago I really felt that they could be immensely useful in the way they could potentially separate all data mapping concerns from the rest of the code base.
But I quickly ended up in dependency or type hell when I tried using them. I hope that people will take inspiration from the concept in the future.
> But I quickly ended up in dependency or type hell when I tried using them.
Wouldn’t know about the dependency issues, but the newer ‘optics’ library [https://hackage.haskell.org/package/optics] has much simpler type signatures and greatly improved error messages. Personally I don’t like it, since it hides the implementation details too much for my taste, but it seems to be becoming pretty popular in the Haskell community.
It may be a matter of experience. I probably would not be able to use them during my first or second year of slowly learning Haskell, but later when I learned to naturally think in terms of functors, applicative and traversables, I just sat down and used them.
A lot of the time when I couldn't completely untangle some type definition in lens library, I'd just go with gut feeling or guess what it does by analogy with stuff I already understood, and surprisingly often it compiled and worked like I expected.
I have as a general principle that it is difficult to understand a solution until you have the problem.
You will need to attain a certain degree of fluency in Haskell before deep access into structures, or any of the other problems lens can solve [1], becomes a clearly-separated problem that rates among your biggest.
Until you reach that point, I advise not to touch lens. By the time that is your biggest problem, it is likely that your other Haskell experiences will make it not too frightening, but it's not a very good gateway into Haskell.
[1]: One of my favorite uses, though not a common one, is that you can use it to abstract a pair (thing to operate on, how to operate on it) in a way where both the elements are still fully composable, where the lens is sitting in the second spot of that tuple, and whatever is operating on that only needs the type of the "how to operate on it" part, it is oblivious to the type of the "thing to operate on". I used it to build a game engine where the engine did not have a hard-coded concept of "player 1 goes, then player 2 goes, then player 1 goes", etc.; instead, the game logic itself decided that and all the engine got was a combination of what structs to manipulate and what to perform on that struct. (The engine itself did networking and converted things to and from JSON, while letting the pluggable game logic do everything else.) An interesting use for lenses beyond just "better record syntax" as I started passing the lenses themselves around in my code as first-class values. But probably ultimately an even worse way to get into Haskell than using lenses as record syntax improvements.... my point here is just that lenses do have uses beyond record accessing, that's just their headline usage.
I must admit though that I've spent quite a lot of effort on playing with types that time, and often neglected carefully designing the architecture of my program and the way I'm applying lenses there, which made me regret it later.
> just to save a few characters and make everything more magic than it is
No, this is not the true motivation. Human attention latches onto symbols: the more succinct the code, the more meaning it can pack into the same number of characters. Writing sufficiently abstract code often requires inventing new languages to hide the complexity behind abstractions.
Software development in Haskell is particularly focused on DSLs, lens language being one of them.
^.. is a relatively simple operator though; it just abstracts away traversals.
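Concretely, `(^..)` is just the infix form of `toListOf`: it collects every target of a traversal into a list. A small example, assuming the lens package is available:

```haskell
import Control.Lens

main :: IO ()
main = do
  -- Traverse each list element, then focus the first tuple slot.
  print ([(1, 'a'), (2, 'b')] ^.. each . _1)         -- [1,2]
  -- The exact same thing, written with the named function.
  print (toListOf (each . _1) [(1, 'a'), (2, 'b')])  -- [1,2]
```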
> Writing abstracted enough code often requires inventing new languages to hide the complexity behind abstractions.
Human attention may pick out symbols more easily, but human comprehension benefits from names that actually carry meaning. Otherwise you end up with a soup of ASCII symbols that very few can parse, since every library in Haskell seems hellbent on inventing new and useless ASCII combinations for function names and operators.
Even the article goes (emphasis mine):
--- start quote ---
The basic operators here are ^., .~, and %~. If you’re not a fan of funny-looking operators, Control.Lens provides more helpfully-named functions called view, set, and over (a.k.a. mapping)
--- end quote ---
And then proceeds to use the ASCII soup instead of the more helpful function names.
It doesn't help in the least that there are multiple get operators: ^. ^? ^.. Or this idiocy: "Here, a single < before the operator signifies that it returns the updated value, while a double << signifies that it returns the old value. This gets a little weird with the monoidal operator; <<>~ returns the updated value, not the old one". Bonus points: what does this do: <<<>~
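For the record, the three get operators differ in how many targets they expect, and each does have a named equivalent (a sketch, assuming the lens package):

```haskell
import Control.Lens

main :: IO ()
main = do
  -- (^.) is `view`: exactly one target.
  print (("x", 'y') ^. _1)                              -- "x"
  -- (^?) is `preview`: zero or one target, so the result is a Maybe.
  print ([10, 20, 30] ^? ix 1)                          -- Just 20
  print ([10, 20, 30] ^? ix 9)                          -- Nothing
  -- (^..) is `toListOf`: any number of targets, collected into a list.
  print ([Left 1, Right 'c', Left 3] ^.. each . _Left)  -- [1,3]
```

So the "soup" is at least consistent: the suffix after `^` tracks the cardinality of the focus.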
Because "why have useful names when you can project fake intellectual superiority by using confusing ASCII symbol combinations"
> Software development in Haskell is particularly focused on DSLs, lens language being one of them.
The domain specific language for lenses is view, set, traverse, update etc. Not whatever this is.
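And to be fair, that named vocabulary does exist in Control.Lens and works fine on its own; a quick sketch assuming the lens package:

```haskell
import Control.Lens

main :: IO ()
main = do
  print (view _1 ("x", 'y'))           -- "x"
  print (set  _2 'z' ("x", 'y'))       -- ("x",'z')
  print (over _1 reverse ("ab", 'y'))  -- ("ba",'y')
```

These are exactly `^.`, `.~`, and `%~` spelled out; nothing forces the operator forms on you.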
> The domain specific language for lenses is view, set, traverse, update etc. Not whatever this is.
The word you are looking for is "script"; though that is rather ambiguous in the context of programming. I suppose "writing system" is good enough.
And now we have entirely expressed the problem: someone tried to make their code more readable by adding a domain-specific writing system! Just break out your handy Rosetta stone, and you'll be fine, right?
Yes, and you tend to use ^. in Haskell where you'd use . in Python, say. And . is also an operator. Similarly, you'd likely be using = in Python where you'd use .~ in Haskell. As for ^.., I'm not sure there's an idiomatic Python equivalent - certainly nothing as simple as adding an extra . to the operator.
I don't understand why Perl is consistently dismissed as "line noise" and "write-only" but that characterization's rarely leveled against Haskell. It's much worse about it, IMO. At least the "line noise" variety of Perl doesn't usually dominate basic tutorials and introductions to the language.
[EDIT] Nor do Perl libraries tend to contribute to or encourage line-noise style.
I agree that FP languages sometimes exaggerate with the operators (I think introducing new operators has its uses, but the learning curve has to be traded off against the supposed benefits), but...
Perl is IMHO worse because it relies much more on global mutable context. For example, the diamond operator <> behaves differently depending on whether the magic global variable @ARGV is set.
By contrast, Haskell is typed and purely expression based, so that at least in principle you can pull apart expressions, examine them (and test them!) individually and then understand how they combine.
I kind of see idioms like lenses as part of the language.
But the language allows defining your own (infix) operators, so the core language designers weren't entirely innocent either. The abundance of custom operators in Haskell kind of made me appreciate Lisp and Forth a little more!
There are of course many good things about having your language designed by mathematicians, too.
For anyone - anyone who's faffed around with JavaScript's spread syntax when working with nested immutable data, anyway - struggling to understand the general concept or utility of optics, I'd encourage looking into monocle-ts. The motivation [0] that kicks off the README instantly made me a believer, and TypeScript's idiosyncratic type system makes using the library remarkably natural; it fits right in without needing to deal with anything analogous to Template Haskell.
I remember being stuck on Haskell lenses for a while, until I asked the brilliant Scott Wlaschin during a speakers' dinner in Lithuania to explain lenses in layman's terms to me, to which he simply replied "it's just like property getters and setters".
Short, to the point, and I immediately understood the core value prop. There's of course a lot more to it, but in essence that's what it's about...
we use lenses in react/typescript together with jotai atoms to express data paths in complex structures. it’s an amazing basic idea that is done a disservice by the proliferation of weird concepts and operators
> This isn’t meant as a lens tutorial. Prerequisites are some basic knowledge of how to use Haskell, GHCI, and familiarity of the idea of lenses, if not the specifics.
With that out of the way, it's kind of disappointing that the article isn't a tutorial. As it is, it just lists the lens operators, gives you alternate, but still non-obvious names for them, and foists some exercises upon you. It could have easily given one short example for each operator without adding any significant tedium for those who already have an idea of what they do, and then it would have been a perfectly fine tutorial.
So here are some examples (Hoogle has some nice examples too; thanks, Hoogle):
```
person = Person {_name = "Alice", _title = "Dr"}

-- get field with magic lens
person ^. title == "Dr"

-- create new object with new field value
person & name .~ "Bob"
    == person & (name .~ "Bob")
    == (name .~ "Bob") person
    == Person {_name = "Bob", _title = "Dr"}

-- create new object with function applied to field value
person & name %~ lowercase
    == (name %~ lowercase) person
    == Person {_name = "alice", _title = "Dr"}
```
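To actually run examples like these, the `name` and `title` lenses have to come from somewhere; with the lens package, one Template Haskell splice generates them from the underscore-prefixed fields. A runnable sketch, substituting `map toLower` for the `lowercase` above:

```haskell
{-# LANGUAGE TemplateHaskell #-}
import Control.Lens
import Data.Char (toLower)

data Person = Person { _name :: String, _title :: String }
  deriving (Show, Eq)
makeLenses ''Person  -- generates the `name` and `title` lenses

main :: IO ()
main = do
  let person = Person { _name = "Alice", _title = "Dr" }
  print (person ^. title)               -- "Dr"
  print (person & name .~ "Bob")        -- Person {_name = "Bob", _title = "Dr"}
  print (person & name %~ map toLower)  -- Person {_name = "alice", _title = "Dr"}
```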
Its payoff doesn't seem to justify the cognitive overhead (concepts, operators, etc) that it brings.
I'd highly recommend that newcomers to Haskell avoid Lens in their own code for as long as possible. The juice really isn't worth the squeeze.