Hacker News new | past | comments | ask | show | jobs | submit | yogsototh's comments login

As a Haskell and Clojure dev: data manipulation in Clojure requires a mind shift coming from Haskell. But it feels a lot more straightforward in Clojure, even if `lens` is incredible. It is as if Haskell lenses were included in the core language, if you want. Of course, not type-safe. It's just that solving problems in Haskell or in Clojure doesn't really require the same approach. But both ways are delightful. And personally, Clojure has always felt more natural, even though I also love the Haskell approach.


One part of the joy that came with using Clojure is that there isn't really any bad choice. For example, I wanted a quick and dirty internal web application for admin purposes. I went with re-frame. Why? Because I wanted to try it. And I knew that I could get things done with it.

This is probably not the easiest "framework" (not sure this is a good name for it) but it was very fun. If I had to progress from just a toy to a really strong, user-facing UI, I think it would still be a pretty good choice. If some feature is missing, or something doesn't work as I would have liked, I know it will not be difficult to correct it myself.

I know that this could feel overwhelming, but this is freedom in a world where you expect there to exist a single "best practice". As long as the tool is powerful enough it will be fine. On my end, I appreciate the fact that there is not a single "web framework" in Clojure. Instead you have tons of libs you can use that work well together because they are all tied to a few common concepts (like ring, for example). If you don't like choosing the libs, a few people provide a starter pack bundling a somewhat nice set of libs into a single system, like pedestal or luminus, to cite only two of them.

My recommendation is not to lose too much time looking for the best choice. Most of them will be good.


I am pretty confident the spammers will remove the `+` suffix from your email. And this is why I find Apple's fake-email solution a lot better: they build a fully different email per service. There is no way for the service to cheat and discover my real email address from the one I give them.

Still, a smart enough system might be able to discover a valid email from my other ID info, like my name. But that starts to be a lot of work, while just `s/+[^@]*@/@/` is easy enough to do.
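A minimal sketch of that normalization in Python (addresses here are made up); note the replacement keeps the `@` so the result is a deliverable base address:

```python
import re

def strip_plus_suffix(address):
    # Drop the "+tag" part of the local part, keeping the "@":
    # "jane+shopping@example.com" -> "jane@example.com"
    return re.sub(r"\+[^@]*@", "@", address)

print(strip_plus_suffix("jane+shopping@example.com"))  # jane@example.com
print(strip_plus_suffix("jane@example.com"))           # unchanged
```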


I started worrying about the `+` address functionality as well, so I set up postfix aliases with `me.%@domain` (I use postgres for domains/aliases/accounts) and then have my virtual_alias_map run the query `SELECT a1.goto FROM alias a1 LEFT JOIN alias a2 on (a2.address = '%s') WHERE '%s' = a1.address OR ('%s' LIKE a1.address AND a1.active = true AND a2.address IS NULL)` - I now have `.` address functionality instead. It's much more common for email addresses to have `.` in them, so it's less likely to trigger alarm bells.
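The wildcard fallback in that query can be sketched in Python (the alias table and addresses are hypothetical): a SQL LIKE pattern such as `me.%@domain` is translated to a regex, and any dotted variant resolves to the real mailbox:

```python
import re

def like_to_regex(pattern):
    # Translate a SQL LIKE pattern ('%' = any run, '_' = one char) to a regex.
    body = "".join(".*" if c == "%" else "." if c == "_" else re.escape(c)
                   for c in pattern)
    return re.compile("^" + body + "$")

# Hypothetical alias table: LIKE pattern -> real mailbox (the "goto" column).
ALIASES = {"me.%@example.com": "me@example.com"}

def resolve(address):
    # The real query checks exact-match aliases first; this only shows
    # the wildcard fallback.
    for pattern, goto in ALIASES.items():
        if like_to_regex(pattern).match(address):
            return goto
    return None

print(resolve("me.shop@example.com"))   # me@example.com
print(resolve("stranger@example.com"))  # None
```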


I am very glad about the architecture we use at my current company. It is a monolith, BUT with the capacity to be deployed as microservices if needed.

The code is structured in such a way that you only start the sub-services you need. We have nodes that launch almost all services, and some that launch only a few.

If we need to scale just a particular part of the system, we can easily scale the same node, configuring it just for the sub-services we need.
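A hypothetical sketch of the idea, with made-up service names: one codebase, many deployable shapes, where each node starts only the sub-services its configuration enables:

```python
# One codebase; each node starts only the sub-services it is configured for.
SERVICES = {
    "web":     lambda: "web listening",
    "worker":  lambda: "worker polling",
    "billing": lambda: "billing ready",
}

def start_node(enabled):
    # Start the configured subset of sub-services on this node.
    return {name: SERVICES[name]() for name in enabled if name in SERVICES}

full_node    = start_node(SERVICES.keys())  # "monolith": every service
billing_node = start_node({"billing"})      # scaled-out billing-only node
print(sorted(full_node), sorted(billing_node))
```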


Let me talk about my experience. I was (yes, was) mostly a Haskeller. I loved using Haskell, and even a long time ago (in 2007, I think) when I learned it, the book teaching the basics of the language had snippets that no longer worked.

But I still loved it. And it changed every year, and I was even among the proponents of the changes.

But after a while, you realise that your old projects stop working. If you want to use a new library, you also need to update many dependencies. But each one has breaking changes, so you start updating your code.

Mainly, if you wrote some code and forgot about it, it will no longer work or will be very hard to use with newer libraries. After a while this became very tedious.

In Clojure, the code I wrote 10 years ago is still working perfectly today. The application still launches.

If a bug is discovered and I need to fix it, for example by upgrading or adding a new lib, I am not afraid of losing hours finding and fixing existing code that suddenly became deprecated.

So yes, stability is VERY important for me now.

And last but not least, Rich Hickey talked about it in this talk and makes a lot of very good points:

https://piped.video/watch?v=oyLBGkS5ICk


Let me add my 2cents on this subject.

I feel that, due to its functional nature, or perhaps just community bias, Clojurists tend to prefer mixing lower-level libraries to build their web applications instead of relying on a specific big web framework like RoR, for example. Luminus would be one RoR-like system.

But today, I think people would probably choose reitit for writing the API and use a ClojureScript framework for the frontend, or perhaps still use reitit to generate HTML pages using hiccup.


That's really great! Thank you for that.

I saw some people asking for a good RSS reader. Personally I use elfeed, and I also use elfeed-org, so I could import the OPML file and merge it into my existing RSS feeds.


I use FreshRSS in combination with Reeder for Mac/iOS.


My combination of choice too.

The fact that FreshRSS can fetch full article contents for truncated RSS feeds is precisely the thing you need in order to make RSS useful today. Not the biggest fan of the UI but this one feature makes the whole thing so fucking good.


This is pretty nice, but for my blog I noticed the latest articles are wrong. Check yannesposito.com


It looks like the description I pulled from your site was "Most recent articles", which I think made things confusing haha

  <meta name="description" content="Most recent articles">
If you look at fetch.js in the repo, it just pulls the top posts from Algolia search.


https://yannesposito.com (or its clone https://her.esy.fun)

I blog about functional programming (Haskell, Clojure), but also Emacs org-mode, things like that. I sometimes tell myself I should invest more time in writing down more of my thoughts there.

I am happy to be part of the 512kb club.

Here are a few posts that were somehow popular:

- https://her.esy.fun/Scratch/en/blog/Learn-Vim-Progressively/

- https://her.esy.fun/Scratch/en/blog/Haskell-the-Hard-Way/ (updated by https://her.esy.fun/posts/0010-Haskell-Now/index.html )

- https://her.esy.fun/Scratch/en/blog/Yesod-tutorial-for-newbi...


Personally I find that Lisp syntax removes a layer of complexity by directly exposing the AST to my brain instead of adding a layer of internal parsing.

    1 + x * 2 - 3 % x
is longer to decipher than

    (% (- (+ (* x 2) 1) 3) x)
which is itself harder than

    (-> x (* 2) (+ 1) (- 3) (% x))
But it takes a while to get used to it. And yes, it really helps with writing macros, but I wouldn't say this is always a good thing. Macros are far from being the alpha and omega of programming, as they add an implicit layer of transformation to your code, making it easier to write but very often harder to read and reason about.
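For readers without a Clojure REPL at hand, the third form can be mimicked in Python with a small helper (a hypothetical stand-in for Clojure's -> macro), which shows that the nested and threaded forms compute the same thing:

```python
import operator as op

def thread(value, *steps):
    # A tiny stand-in for Clojure's -> macro: feed the value through each
    # (function, *extra_args) step in reading order.
    for fn, *args in steps:
        value = fn(value, *args)
    return value

x = 7
nested   = op.mod(op.sub(op.add(op.mul(x, 2), 1), 3), x)  # (% (- (+ (* x 2) 1) 3) x)
threaded = thread(x, (op.mul, 2), (op.add, 1), (op.sub, 3), (op.mod, x))
print(nested, threaded)  # 5 5
```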


I started working through Crafting Interpreters, building up a language syntax and grammar from scratch. A lot of work and 75 pages of lex/parse logic later, and we now have an AST... that we can debug and inspect by looking directly at its sexp representation.

It was the ah-ha moment for me... why not express the source code directly as that AST? Most languages require lots of ceremony and custom rules just to get here. Sexps are a step ahead (inherently simpler) since they're already parsable as an unambiguous tree structure. It's hard to unsee - reading any non-Lisp language now feels like an additional layer of complexity hiding the real logic.


Much of the complexity and error reporting that exists in the lexer or parser in a non-Lisp language just gets kicked down the road to a later phase in a Lisp.

Sure, s-exprs are much easier to parse. But the compiler or runtime still needs to report an error when you have an s-expr that is syntactically valid but semantically wrong like:

    (let ())
    (1 + 2)
    (define)
Kicking that down the road is a feature because it lets macros operate at a point in time before that validation has occurred. This means they can accept as input s-exprs that are not semantically valid but will become valid after macro expansion.

But it can be a bug because it means later phases in the compiler and runtime have to do more sanity checking and program validation is woven throughout the entire system. Also, the definition of what "valid" code is for human readers becomes fuzzier.


> later phases in the compiler and runtime have to do more sanity checking

But they always have to do all the sanity checking they need, because earlier compiler stages might introduce errors and propagate errors they neglect to check.

> program validation is woven throughout the entire system

Also normal and unavoidable.

As far as processing has logical phases and layers, validation aligns with those layers (the compiler driver ensures that input files can be read and have the proper text encoding, the more language-specific lexer detects mismatched delimiters and unrecognized keywords, and so on); combining phases, e.g. building a symbol table on the go to detect undefined identifiers before parsing is complete, is a deliberate choice that improves performance but increases complication.


> because earlier compiler stages might introduce errors and propagate errors they neglect to check.

Static analyzers for IDEs need to handle erroneous code in later phases (for example, being able to partially type check code that contains syntax errors). But, in general, I haven't seen a lot of compiler code that redundantly performs the same validation that was already done in earlier phases. The last thing you want to do when dealing with optimization and code generation is also re-implement your language's type checker.


Those rules help reduce runtime surprises though, to be fair. It's not like they exist for no purpose. They directly represent the language designer making decisions to limit what is a valid representation in that language. Rule #1 of building robust systems is making invalid state unrepresentable, and that's exactly what a lot of languages aim to do.


Note that this approach has been reinvented with great industry success (definitions may differ) at least twice - once in XML and another time with the god-forsaken abomination that is YAML, both times without the Lisp engine running in the background, which is what actually makes working with ASTs a reasonable proposition. And I'm not what you could call a Lisp fan.


You left out the more common options:

    (1 + x * 2 - 3) % x
and

    (1 + (x * 2) - 3) % x
both are clearer than the S-expression in my opinion


I don't find them clearer, with the background of knowing many languages, because now I have to worry about precedence, and I'd better double-check so as not to get it wrong or read it wrong.


I agree, and this is why, for math expressions that are more than just compositions of non-operator functions, I like to use a macro that lets me type them in as infix. It's the one case where lispy syntax just doesn't work well, IMO.


As someone who isn't a trained programmer (and has no background or understanding of lisp) that looks like you took something sensible and turned it into gibberish.

Is there a recommended "intro to understanding lisp" resource out there for someone like myself to dive in to?


  (-> x (* 2) (+ 1) (- 3) (% x))
The part that is confusing if you don't know Clojure is (->). This is a threading macro, and it passes "x" through a list of functions.

So it basically breaks this down into a list of instructions applied to x. You multiply it by 2, add 1 to it, take 3 from it, then take the modulus by the original value of x (the value before any of these steps).

Clojurists feel like this looks more readable than the alternative, because you have a list of transformations to read left to right, vs this

  (% (- (+ (* x 2) 1) 3) x)
Which is the most unreadable of them all, to me.


I feel like there's a joke in here somewhere about a backwards Forth dialect, but this also reminds me of chaining idioms used in other languages.

Currying with OOP:

  res = x
    .mult(2)
    .add(1)
    .sub(3)
    .mod(x)
Currying with assignment operators:

  res = x
  res *= 2
  res += 1
  res -= 3
  res %= x
Naming things instead of Currying:

  scale = 2
  offset = 1 - 3
  with_scale = x * scale
  with_offset = with_scale + offset
  result = with_offset % x
Or aping that in Scheme:

  (let* ((scale 2)
         (offset (- 1 3))
         (with_scale (* x scale))
         (with_offset (+ with_scale offset)))
       (remainder with_offset x))


this is very readable if you know to read the operations inside-out instead of left-to-right


But that reading requires looking back and forth between the operator and the operand. The further out you move, the more you shift your eyes, and the harder it becomes to quickly jump back to the level of nesting you are currently on at the other side.


Be that as it may, it's less readable than the other ones, to me


  '(
    1. This is a list: ( ). Everything is a list. Data structures are all lists.
    2. A program is a list of function calls
    3. Function calls are lists of instructions and parameters. For (A B C), A is the function name, B and C are parameters.
    4. If you don't want to execute a list as a function, but as data, you 'quote' it using a single quote mark '(A B C)
    5. Data is code, code is data.)


> Is there a recommended "intro to understanding lisp" resource out there for someone like myself to dive in to?

Practical Common Lisp - https://gigamonkeys.com/book/

Casting SPELs in Lisp - http://www.lisperati.com/casting.html

Structure and Interpretation of Computer Programs - https://mitp-content-server.mit.edu/books/content/sectbyfn/b...

One of many prior discussions here on HN: https://news.ycombinator.com/item?id=22913750

... amongst many resources for learning LISP.


The language-fundamentals part of "Clojure for the Brave and True" (the best intro-to-Clojure book, IMO) is excellent (if you consider Clojure a Lisp). I find the author's style/humor engaging.

https://www.braveclojure.com/clojure-for-the-brave-and-true/


It is gibberish. The expression means:

  (1 + (x * 2)) - (3 % x)
So the correct S-exp (let's use mod for modulo rather than %):

  (+ 1
     (* x 2)
     (- (mod 3 x)))

It's a sum of three terms, which are 1, (* x 2) and something negated (- ...), which is (mod 3 x): remainder of 3 modulo x.

The expression (% (- (+ (* x 2) 1) 3) x) corresponds to the parse

  ((x * 2 + 1) - 3) % x
I would simplify that before anything by folding the + 1 - 3:

  (x * 2 - 2) % x
Thus:

  (% (- (* 2 x) 2) x).
Also, in Lisps, numeric constants include the sign. This is different from C and similar languages where -2 is a unary expression which negates 2: two tokens.

So you never need this: (- (+ a b) 3). You'd convert that to (+ a b -3).

Trailing constant terms in formulas written in Lisp thus don't need extra brackets around a - function call.
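A quick numeric sanity check of the two competing parses, sketched in Python (where, as in C, % binds as tightly as *):

```python
# The bare expression `1 + x * 2 - 3 % x` parses as (1 + (x * 2)) - (3 % x),
# not as ((x * 2 + 1) - 3) % x; and the + 1 - 3 folding is sound.
for x in (5, 7, 11):
    bare       = 1 + x * 2 - 3 % x
    c_parse    = (1 + (x * 2)) - (3 % x)  # (+ 1 (* x 2) (- (mod 3 x)))
    sexp_parse = ((x * 2 + 1) - 3) % x    # (% (- (+ (* x 2) 1) 3) x)
    folded     = (x * 2 - 2) % x          # (% (- (* 2 x) 2) x)
    assert bare == c_parse
    assert sexp_parse == folded
print("both parses check out")
```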


In real Lisp code you'd likely indent it something like this:

    (% 
      (-
        (+ (* x 2)
           1)
        3)
      x)
This makes the structure clearer, although it's still wasteful of space, and you still have to read it "inside-out". The thread macro version would be:

  (-> x
    (* 2)
    (+ 1)
    (- 3)
    (% x))
It's more compact, there's no ambiguity about order-of-operations, and we can read it in order, as a list of instructions:

"take x, times it by 2, add one, subtract 3, take modulus with the original x".

It's pretty much how you'd type it into a calculator.

EDIT: care to explain the downvote?


For what it's worth (speaking only for myself), I could not live without the threading macros (-> and ->>) in Clojure. Below is an example of some very involved ETL work I just did. For me this is very readable, and I understand if others have other preferences.

  (defn run-analysis [path]
    ;; load data from converted arrow file
    (let [data (load-data path)]
      (-> data
          ;; Calc a Weeknumber, Ad_Channel, and filter Ad_Channel for retail
          add-columns
          ;; Agg data by DC, Store, WeekNum, Item and sum qty and count lines
          rolled-ds
          ;; Now Agg again, this time counting the weeks and re-sum qty and lines
          roll-again)))

  ;; Run the whole thing
  (time (run-analysis arrow-path))

This code processes 27 million lines in 75 seconds (shout out to https://techascent.github.io/tech.ml.dataset/100-walkthrough... library)


If such an expression gets indented at all, it would be more like this:

  (% (- (+ (* x 2)
           1)
        3)
     x)


> In real Lisp code you'd likely indent it something like this:

Not only would that not be idiomatic, the operator for modulus in Common Lisp is mod, not %, and the brackets you and the parent used in the s-expr are around the wrong groups of symbols. So you're more likely to see:

(mod (* (+ 1 x) (- 2 3)) x)

or maybe with some limited indentation, such as:

  (mod
    (* (+ 1 x) (- 2 3))
    x)


Nobody said it had to be Common Lisp. I'm going by the notation the grandparent commenter used. My point was that indentation can clarify the structure of nested sexps vs putting them on one line. And that is actually what people do. "mod" vs "%" hasn't the least to do with it. This isn't even really about arithmetic; those are just at-hand examples the GP commenter chose. Could just as well have been

  (foo 
    (bar
      (baz (bax x 2)
           "hello")
      "world")
    "!")
> the brackets you and the parent used in the s-expr are around the wrong groups of symbols

No they're not. Yours is wrong. Multiplication has higher priority than addition so the order of evaluation begins with (x * 2) not (1 + x).


> No they're not. Yours is wrong. Multiplication has higher priority than addition so the order of evaluation begins with (x * 2) not (1 + x).

OK, I shouldn't have gone that far.

FWIW, modulus has the same operator precedence as multiplication and division.

So really, it is more like:

  (- (+ 1 (* x 2))
     (mod 3 x))

or using the increment function:

  (- (1+ (* x 2))
     (mod 3 x))


Alright well I guess this is an object lesson in why infix notation is bad -- nobody can remember all the precedence rules once you get beyond +-*/^.


We'll call it even :-)


Interestingly, the second form is just infix notation where every operator has the same precedence and thus is evaluated left to right. That says to me that it's not infix notation that's inherently weird, but rather the operator precedence rules of mathematical infixes that are weird.


> that looks like you took something sensible and turned it into gibberish

This is the main thing I use Lisp (well, Guile Scheme) for. I used to use bc for little scratch pad calculations, now I usually jump into Scheme and do calculations. I don't recall if I thought it looked like gibberish at first but it's intuitive to me now.



Reject PEMDAS, return to monky:

x * 2 + 1 - 3 % x

https://mlajtos.mu/posts/new-kind-of-paper-2


My preferred version of this:

x.*(2).+(1).-(3).%(x)

Unfortunately our brains are broken by pemdas and need clear delineations to not get confused; this syntax also extends to multiple arguments and is amenable to nesting.


When I learned APL, the expression evaluation order at first seemed odd (strictly right to left with no operator precedence, so 5*5+4 evals to 45, not 29). After working with it for a couple of hours I came to appreciate its simplicity, kind of like the thread operator in your last example.
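That right-to-left rule is small enough to sketch as a toy evaluator in Python (a simplified model, not real APL, which also has monadic operators):

```python
import operator as op

OPS = {"+": op.add, "-": op.sub, "*": op.mul, "%": op.mod}

def eval_rtl(tokens):
    # APL-style evaluation: no precedence, everything groups to the right,
    # so [5, "*", 5, "+", 4] means 5 * (5 + 4).
    head = tokens[0]
    if len(tokens) == 1:
        return head
    return OPS[tokens[1]](head, eval_rtl(tokens[2:]))

print(eval_rtl([5, "*", 5, "+", 4]))  # 45, not 29
```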


Well, the easiest way to write

  1 + x * 2 - 3 % x
would just be "x-2".

But if we're talking more generally, if I have an expression like

  2*x^3 + x^2 - 5*x
a trained eye immediately can read off the coefficients of the polynomial and I'm not sure if that's true of

  (+ (* 2 (^ x 3)) (^ x 2) (- (* 5 x)))


For writing a program, the s-expression form might become:

    (+ (* 2 (^ x 3))
       (^ x 2)
       (- (* 5 x)))
Whereas:

    2*x^3 +
    x^2 -
    5*x
Would probably error out in most languages, due to parsing issues and ambiguity. Even worse ambiguity, if you put the signs in front, as then every line could be an expression by itself:

    2*x^3
    + x^2
    - 5*x
Could be 3 expressions or 2 or 1.


It might do the wrong thing in some languages but wouldn't necessarily raise a compiler error, and I'm fairly certain e.g. sympy should have no issue with it.


> `(-> x (* 2) (+ 1) (- 3) (% x))`

Love the single pipe operator. What language works this way?


That's a Clojure idiom; similar macros exist in other Lisps.


That's how my brain feels. It connects information (compound terms) to entities directly; it's almost the minimal information required to represent something, unlike ALGOL-based languages.


I prefer infix over prefix. And don't forget postfix

    x 3 1 2 x * + - %
I definitely prefer white-space over commas and other syntactic obstacle courses.


I mean I would probably punch that in as

    x 2 * 1 + (-) 3 + x %
when I was using my RPN calculator. (-) flips the sign of the number on top of the stack.

Or use flip -. Either way, you avoid growing the stack long enough to be confusing.
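A minimal RPN evaluator in Python, with "neg" as a hypothetical stand-in for the (-) key described above:

```python
import operator as op

BINOPS = {"+": op.add, "-": op.sub, "*": op.mul, "%": op.mod}

def eval_rpn(tokens):
    # Classic stack machine: numbers push; binary operators pop two operands;
    # "neg" flips the sign of the top of the stack, like the (-) key.
    stack = []
    for tok in tokens:
        if tok in BINOPS:
            b, a = stack.pop(), stack.pop()
            stack.append(BINOPS[tok](a, b))
        elif tok == "neg":
            stack.append(-stack.pop())
        else:
            stack.append(tok)
    return stack[0]

print(eval_rpn([3, 4, "+", 2, "*"]))  # 14
x = 7
# The keystrokes above, with (-) spelled "neg" (Python % follows the divisor's sign)
print(eval_rpn([x, 2, "*", 1, "+", "neg", 3, "+", x, "%"]))
```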


Using piping for arithmetic is reminiscent of grade-school arithmetic dictations.

"Start with 10; add 5; divide by 3; ..."

:)


If the language has no operator precedence then

  1+x*2-3%x
is just as easy if not easier to decipher compared to both of your other examples imo. The above is equivalent to

  (1+(x*(2-(3%x)))) 
in APL/j/k. You get used to it pretty quickly.

