As a Haskell and Clojure dev: data manipulation in Clojure requires a mind shift coming from Haskell, but it feels a lot more straightforward in Clojure, even though `lens` is incredible. It is as if Haskell lenses were included in the core language, if you like. Not type-safe, of course.
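A quick sketch of what I mean: the path-based functions in Clojure's core cover much of the everyday lens use case, on plain data (the names here are just an example):

(def order {:items 2, :customer {:address {:city "Paris"}}})

(get-in order [:customer :address :city])          ;=> "Paris"
(assoc-in order [:customer :address :city] "Lyon") ; returns an updated copy
(update-in order [:items] inc)                     ;=> {:items 3, ...}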
The point is just that solving problems in Haskell and in Clojure does not really require the same approach, but both ways are delightful. Personally, Clojure has always felt more natural, even though I also love the Haskell approach.
One part of the joy that came with using Clojure is that there isn't really any bad choice. For example, I wanted a quick and dirty internal web application for admin purposes. I went with re-frame. Why? Because I wanted to try it, and I knew I could get things done with it.
It is probably not the easiest "framework" (not sure that's a good name for it), but it was very fun. And if I had to progress from a toy to a really strong, user-facing UI, I think it would still be a pretty good choice. If some feature is missing, or something doesn't work as I would have liked, I know it will not be difficult to fix it myself.
I know this can feel overwhelming, but it is freedom in a world where you expect a single "best practice" to exist. As long as the tool is powerful enough, it will be fine. For my part, I appreciate that there is not a single "web framework" in Clojure. Instead you have tons of libs that work well together because they are all tied to a few common concepts (like ring, for example). If you don't want to choose the libs yourself, a few people provide starter packs that bundle a nice set of libs into a single system, like pedestal or luminus, to cite only two.
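The common concept is tiny, which is why the libs compose. A minimal sketch of the Ring contract (the adapter shown is the usual ring-jetty-adapter, but treat the setup as an assumption):

;; the whole contract: request map in, response map out
(defn handler [request]
  {:status  200
   :headers {"Content-Type" "text/plain"}
   :body    (str "Hello from " (:uri request))})

;; any Ring adapter can then serve it, e.g.:
;; (require '[ring.adapter.jetty :as jetty])
;; (jetty/run-jetty handler {:port 3000 :join? false})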
My recommendation is to not lose too much time looking for the best choice. Most of them will be good.
I am pretty confident spammers will simply strip the `+` suffix from your email. This is why I find Apple's fake-email solution a lot better: they build a fully different email per service. No way for the service to cheat and discover my real email address from the one I give them.
Still, a smart enough system might be able to discover a valid email from my other identifying info, like my name. But that starts to be a lot of work, while just `s/+[^@]*//` is easy enough to do.
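That normalization is a one-liner anywhere; for instance, in Clojure (a sketch):

(require '[clojure.string :as str])

;; strip a "+tag" suffix from the local part of an address
(str/replace "user+news@example.com" #"\+[^@]*" "")
;=> "user@example.com"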
I started worrying about the `+` address functionality as well, so I set up postfix aliases with `me.%@domain` (I use postgres for domains/aliases/accounts) and then have my virtual_alias_map run the query `SELECT a1.goto FROM alias a1 LEFT JOIN alias a2 ON (a2.address = '%s') WHERE '%s' = a1.address OR ('%s' LIKE a1.address AND a1.active = true AND a2.address IS NULL)`. I now have `.` address functionality and can do the same thing. It's much more common for email addresses to have `.` in them, so it's less likely to trigger alarm bells.
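For anyone wanting to replicate this: that query would live in a postfix pgsql lookup table, roughly like the sketch below (file name, credentials, and the `alias` table layout are my assumptions, not a drop-in config):

# /etc/postfix/pgsql-virtual-alias.cf
hosts    = localhost
user     = postfix
password = secret
dbname   = mailserver
# postfix substitutes the address being looked up for each %s
query    = SELECT a1.goto FROM alias a1
           LEFT JOIN alias a2 ON (a2.address = '%s')
           WHERE '%s' = a1.address
              OR ('%s' LIKE a1.address AND a1.active = true AND a2.address IS NULL)

# and in main.cf:
# virtual_alias_maps = pgsql:/etc/postfix/pgsql-virtual-alias.cf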
Let me talk about my experience.
I was (yes, was) mostly a Haskeller. I loved using Haskell, even though, a long time ago when I learned it (in 2007, I think), the book teaching the basics of the language had snippets that no longer worked.
But I still loved it. And it changed every year, and I was even among the proponents of the changes.
But after a while, you realise that your old projects stop working.
And that if you want to use a new library, you also need to update many dependencies, each with its own breaking changes, so you start rewriting your code.
Mainly: if you wrote some code and forgot about it, it will no longer work, or will be very hard to use with newer libraries. After a while this became very tedious.
In Clojure, the code I wrote 10 years ago is still working perfectly today.
The application still launches.
If a bug is discovered and I need to fix it, for example by upgrading or adding a new lib, I am not afraid of losing hours finding and fixing existing code that suddenly became deprecated.
So yes, stability is VERY important for me now.
And last but not least, Rich Hickey talked about it in this talk and makes a lot of very good points:
I feel that, due to its functional nature, or perhaps just community bias, Clojurists tend to prefer combining lower-level libraries to build their web applications instead of relying on a specific big web framework like RoR, for example. Luminus would be one RoR-like system.
But today, I think people would probably choose reitit for writing the API and use a ClojureScript framework for the frontend, or perhaps still use reitit to generate HTML pages using hiccup.
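A minimal sketch of that combination, assuming reitit and hiccup as dependencies (routes and handlers are made up):

(require '[reitit.ring :as ring]
         '[hiccup2.core :as h])

(def app
  (ring/ring-handler
    (ring/router
      [["/api/ping" {:get (fn [_] {:status 200 :body "pong"})}]
       ["/" {:get (fn [_]
                    {:status 200
                     :headers {"Content-Type" "text/html"}
                     :body (str (h/html [:h1 "Hello"]))})}]])))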
I saw some people asking for a good RSS reader.
Personally I use elfeed, and I also use elfeed-org, which let me import the OPML file containing my existing RSS feeds.
The fact that FreshRSS can fetch full article contents for truncated RSS feeds is precisely the thing you need in order to make RSS useful today. Not the biggest fan of the UI but this one feature makes the whole thing so fucking good.
I blog about functional programming (Haskell, Clojure), but also Emacs, org-mode, things like that. I sometimes tell myself I should invest more time in writing down my thoughts there.
Personally I find that Lisp syntax removes a layer of complexity by directly exposing the AST to my brain instead of adding a layer of internal parsing.
1 + x * 2 - 3 % x
is longer to decipher than
(% (- (+ (* x 2) 1) 3) x)
which is itself harder than
(-> x (* 2) (+ 1) (- 3) (% x))
But it takes a while to get used to it.
And yes, it really helps with writing macros, but I wouldn't say that is always a good thing. Macros are far from being the alpha and omega of programming: they add an implicit layer of transformation to your code, making it easier to write but very often harder to read and reason about.
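A minimal sketch of that trade-off, with a hypothetical `unless` macro:

(defmacro unless [test & body]
  `(if ~test nil (do ~@body)))

(def total 10)
(def n 0)

;; easy to write: reads like a built-in control structure
(unless (zero? n)
  (/ total n))
;;=> nil — the body was never evaluated, which no plain function call
;; could guarantee; that is exactly the implicit layer the reader now
;; has to keep in mind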
I started working through Crafting Interpreters, building up a language's syntax and grammar from scratch. A lot of work and 75 pages of lex/parse logic later, we have an AST... that we can debug and inspect by looking directly at its sexp representation.
It was the aha moment for me... why not express the source code directly as that AST? Most languages require lots of ceremony and custom rules just to get here. Sexps are a step ahead (inherently simpler) since they're already parsable as an unambiguous tree structure. It's hard to unsee: reading any non-Lisp language now feels like an additional layer of complexity hiding the real logic.
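To make it concrete, a quick Clojure sketch: the reader turns text into the tree in one call, no custom grammar needed.

(read-string "(+ 1 (* x 2))")
;;=> (+ 1 (* x 2))  — an ordinary nested list you can walk like any AST

(first (read-string "(+ 1 (* x 2))"))
;;=> +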
Much of the complexity and error reporting that exists in the lexer or parser in a non-Lisp language just gets kicked down the road to a later phase in a Lisp.
Sure, s-exprs are much easier to parse. But the compiler or runtime still needs to report an error when you have an s-expr that is syntactically valid but semantically wrong like:
(let ())
(1 + 2)
(define)
Kicking that down the road is a feature, because it lets macros operate at a point in time before that validation has occurred. This means they can accept as input s-exprs that are not semantically valid but will become valid after macro expansion.
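For instance (a minimal sketch, with a hypothetical `infix` macro): `(1 + 2)` from the list above is not a valid call form on its own, but a macro can rewrite it before that validation ever runs.

(defmacro infix [[a op b]]
  ;; receives the unevaluated form and reorders it into prefix
  (list op a b))

(infix (1 + 2)) ;;=> 3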
But it can also be a bug, because it means later phases in the compiler and runtime have to do more sanity checking, and program validation ends up woven throughout the entire system. Also, the definition of what "valid" code is becomes fuzzier for human readers.
> later phases in the compiler and runtime have to do more sanity checking
But they always have to do all the sanity checking they need, because earlier compiler stages might introduce errors and propagate errors they neglect to check.
> program validation is woven throughout the entire system
Also normal and unavoidable.
Insofar as processing has logical phases and layers, validation aligns with those layers: the compiler driver ensures that input files can be read and have the proper text encoding, the more language-specific lexer detects mismatched delimiters and unrecognized keywords, and so on. Combining phases, e.g. building a symbol table on the go to detect undefined identifiers before parsing is complete, is a deliberate choice that improves performance but increases complication.
> because earlier compiler stages might introduce errors and propagate errors they neglect to check.
Static analyzers for IDEs need to handle erroneous code in later phases (for example, being able to partially type check code that contains syntax errors). But, in general, I haven't seen a lot of compiler code that redundantly performs the same validation that was already done in earlier phases. The last thing you want to do when dealing with optimization and code generation is also re-implement your language's type checker.
Those rules help reduce runtime surprises though, to be fair. It's not like they exist for no purpose. They directly represent the language designer making decisions to limit what is a valid representation in that language. Rule #1 of building robust systems is making invalid states unrepresentable, and that's exactly what a lot of languages aim to do.
Note that this approach has been reinvented with great industry success (definitions may differ) at least twice - once in XML and another time with the god-forsaken abomination of YAML, both times without the lisp engine running in the background which actually makes working with ASTs a reasonable proposition. And I’m not what you could call a lisp fan.
I don't find them to be clearer, with the background of knowing many languages, because now I have to worry about precedence, and I'd better double-check so I don't get it wrong or read it wrong.
I agree, and this is why, for math expressions that are more than simple compositions of non-operator functions, I like to use a macro that lets me type them in as infix. It's the one case where lispy syntax just doesn't work well, IMO.
As someone who isn't a trained programmer (and has no background or understanding of lisp) that looks like you took something sensible and turned it into gibberish.
Is there a recommended "intro to understanding lisp" resource out there for someone like myself to dive in to?
The part that is confusing if you don't know Clojure is `(->)`. This is a threading macro, and it passes `x` through a list of functions.
So it basically breaks this down into a list of instructions to apply to x: multiply it by 2, add 1, subtract 3, then take the modulus by the original value of x (the value before any of these steps).
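You can ask the REPL to show the rewrite (a quick sketch):

(macroexpand-1 '(-> x (* 2) (+ 1) (- 3) (% x)))
;;=> (% (- (+ (* x 2) 1) 3) x)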
Clojurists feel like this looks more readable than the alternative, because you have a list of transformations to read left to right, vs the nested form.
But that reading requires looking back and forth between the operator and the operand. The further out you move, the more you shift your eyes, and the harder it becomes to quickly jump back to the level of nesting you are currently on at the other side.
'(
1. This is a list: ( ). Everything is a list. Data structures are all lists.
2. A program is a list of function calls
3. Function calls are lists of instructions and parameters. For (A B C), A is the function name, B and C are parameters.
4. If you don't want to execute a list as a function, but as data, you 'quote' it using a single quote mark '(A B C)
5. Data is code, code is data.)
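A quick REPL illustration of points 4 and 5:

(+ 1 2)          ;;=> 3        evaluated as a function call
'(+ 1 2)         ;;=> (+ 1 2)  the same list, kept as data
(first '(+ 1 2)) ;;=> +        data you can take apart like any list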
The language-fundamentals part of "Clojure for the Brave and True" (the best intro-to-Clojure book, IMO) is excellent (if you consider Clojure a Lisp). I find the author's style and humor engaging.
So the correct S-exp (let's use mod for modulo rather than %):
(+ 1
   (* x 2)
   (- (mod 3 x)))
It's a sum of three terms, which are 1, (* x 2) and something negated (- ...),
which is (mod 3 x): remainder of 3 modulo x.
The expression (% (- (+ (* x 2) 1) 3) x) corresponds to the parse
((x * 2 + 1) - 3) % x
I would simplify that before anything by folding the + 1 - 3:
(x * 2 - 2) % x
Thus:
(% (- (* 2 x) 2) x).
Also, in Lisps, numeric constants include the sign. This is different from C and similar languages where -2 is a unary expression which negates 2: two tokens.
So you never need this: (- (+ a b) 3). You'd convert that to (+ a b -3).
Without that, trailing constant terms in formulas written in Lisp would need extra brackets around a - function call.
In real Lisp code you'd likely indent it something like this:
(%
  (-
    (+ (* x 2)
       1)
    3)
  x)
This makes the structure clearer, although it's still wasteful of space, and you still have to read it "inside-out". The thread macro version would be:
(-> x
    (* 2)
    (+ 1)
    (- 3)
    (% x))
It's more compact, there's no ambiguity about order-of-operations, and we can read it in order, as a list of instructions:
"take x, times it by 2, add one, subtract 3, take modulus with the original x".
It's pretty much how you'd type it into a calculator.
For what it's worth (speaking only for myself), I could not live without the threading macros (-> and ->>) in Clojure. Below is an example of some very involved ETL work I just did. For me this is very readable, and I understand if others have other preferences.
(defn run-analysis [path]
  ;; load data from converted arrow file
  (let [data (load-data path)]
    (-> data
        ;; calc a Weeknumber, Ad_Channel, and filter Ad_Channel for retail
        add-columns
        ;; agg data by DC, Store, WeekNum, Item and sum qty and count lines
        rolled-ds
        ;; now agg again, this time counting the weeks and re-summing qty and lines
        roll-again)))
> In real Lisp code you'd likely indent it something like this:
Not only would that not be idiomatic, the operator for modulus in Common Lisp is mod, not %, and the brackets you and the parent used in the s-expr are around the wrong groups of symbols. So you're more likely to see:
Nobody said it had to be Common Lisp. I'm going by the notation the grandparent commenter used. My point was that indentation can clarify the structure of nested sexps vs putting them on one line. And that is actually what people do. "mod" vs "%" has nothing to do with it. This isn't even really about arithmetic; those are just at-hand examples the GP commenter chose. Could just as well have been
(foo
  (bar
    (baz (bax x 2)
         "hello")
    "world")
  "!")
>the brackets you and the parent used in the s-expr are around the wrong groups of symbols
No they're not. Yours is wrong. Multiplication has higher priority than addition, so the order of evaluation begins with (x * 2), not (1 + x).
Interestingly, the second form is just infix notation where every operator has the same precedence and is thus evaluated left to right. That says to me that it's not infix notation that's inherently weird; it's the operator precedence rules of mathematical infixes that are weird.
> that looks like you took something sensible and turned it into gibberish
This is the main thing I use Lisp (well, Guile Scheme) for. I used to use bc for little scratch pad calculations, now I usually jump into Scheme and do calculations. I don't recall if I thought it looked like gibberish at first but it's intuitive to me now.
Unfortunately our brains are broken by PEMDAS and need clear delineations to not get confused; this syntax also extends to multiple arguments and is amenable to nesting.
When I learned APL, the expression evaluation order seemed odd at first (strictly right to left with no operator precedence: 5*5+4 evals to 45, not 29). After working with it for a couple of hours I came to appreciate its simplicity, kind of like the thread operator in your last example.
For writing a program, the s-expression form might become:
(+ (* 2 (^ x 3))
   (^ x 2)
   (- (* 5 x)))
Whereas:
2*x^3 +
x^2 -
5*x
Would probably error out in most languages, due to parsing issues and ambiguity. The ambiguity is even worse if you put the signs in front, as then every line could be an expression by itself:
It might do the wrong thing in some languages but wouldn't necessarily raise a compiler error, and I'm fairly certain e.g. sympy should have no issue with it.
That's how my brain feels: it connects information (compound terms) to entities directly. It's almost the minimal information required to represent something, unlike ALGOL-based languages.