Why Racket? Why Lisp? (2014) (beautifulracket.com)
202 points by davikr on Sept 14, 2022 | 151 comments



A fun essay that really tries to nail down the "Why" in a real-world way.

It mentions that functional programming removes the tripwire of state mutation as one of the "whys", but my experience slightly differs here.

I found, after years of struggling with the "why" of object-oriented programming and its early obsessions with design patterns, that functional programming simply resonated with me personally. To this day, I have a bit of inner cringe when reading (for example) Java code.

Early in my career, I thought my lack of "getting OOP" was due to some profound missing piece of understanding. But then Lisp (and Clojure) just made sense. Thinking functionally just works for me. It's not so much that I couldn't use OOP but just that each time I worked in deeper OOP projects, I felt some internal friction or vague uneasiness with the code.

I did find that functional programming made me significantly more comfortable with OOP languages. YMMV.


Which is funny, because OOP should be[1] about the in-between of the objects: the messages[2]. Not about the objects themselves or the classes and GOF design patterns.

With Common Lisp you get generic functions and design your classes around those.
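For readers who haven't seen CLOS, a minimal sketch (illustrative names of my own) of designing around a generic function rather than around the classes:

  (defgeneric area (shape)
    (:documentation "Dispatch on the class of SHAPE."))

  (defclass circle () ((radius :initarg :radius :reader radius)))
  (defclass square () ((side :initarg :side :reader side)))

  (defmethod area ((s circle)) (* pi (radius s) (radius s)))
  (defmethod area ((s square)) (* (side s) (side s)))

  ;; (area (make-instance 'circle :radius 2.0)) => ~12.566

The generic function AREA is the protocol; the classes merely participate in it.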

[1] “OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I'm not aware of them.” - Alan Kay (http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay...)

[2] “The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be.” - Alan Kay (https://wiki.c2.com/?AlanKayOnMessaging)


We should stop invoking Alan Kay's ancient opinion in OOP discussions. I suggest we rebrand his flavour of programming as "message oriented programming" (MOP), thank it for its continuing contributions to the field, and recognize that contemporary OOP is a valid but different solution space.


Rather, we should insist on calling object oriented programming what it is, and come up with an acronym for the poor imitations.

How about Class-Reified Abstract Programming? If the shoe fits...


Modern OOP isn't a poor imitation of anything, it's a distinct approach with its own merits. The No True OOP meme is old and tired, and frankly pointless.


Agreed. Smalltalk is Smalltalk and corporate OOP(Java/C#) is what it is. Neither is superior or more pure.


Smalltalk came after Simula, which created the "poor imitation" you refer to. The reason you remember Kay instead of Simula is that Simula's ideas actually worked, and so were instantly adopted by too many languages for anyone to care where they came from.


It’s imperative programming with dots/infix sigils and either a little or a lot of ceremony.


In this era, DeepClass is a kool name.


Too much mutation leads to deep state.


Clarity is certainly important, in which case I could have said “OOP should have been...”

Except: Smalltalk still lives (Squeak) and Erlang takes messaging to the extreme.


I've not used Erlang much, really, but it seems like a really nice fusion of messaging across agents and immutable state within agents. Elixir has long been on my list of languages to properly pick up...


Also, since Simula preceded Smalltalk, C++ is inspired by Simula, and Java by C++.


I always make a point of addressing the "Java coding in C++" claim, as what actually happened was Java adopting what was the C++ approach to library development before Oak turned into Java.


That's funny, but for me Erlang is much more MOP than Smalltalk.


Sure. Reading the qualifications that Kay makes at the end of his quote (above), he leaves room for Erlang to be MOP. I suspect that he just wasn't familiar with the language.


It's probably even more humorous if you understand that Scheme was created because Gerald Jay Sussman and Guy Steele wanted a way to experiment with Carl Hewitt's actor model, which is based on message passing. Which, itself, was based on Simula, Smalltalk, etc.

Sussman and Steele came up with a way to do lexical scoping in a LISP, thus closures in LISP were born. They then acknowledged that actors and closures are essentially the same concept (I'm assuming with mutation, i.e. Scheme's "set!" and friends because otherwise message passing to update independent actors would be meaningless).
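To make that concrete, a toy sketch in Common Lisp (my example, not theirs), using mutation as the parent paragraph assumes: a closure over private state that responds to "messages", which is the essence of the actors-closures correspondence.

  (defun make-counter ()
    (let ((count 0))          ; private state, like an actor's
      (lambda (msg)           ; dispatch on the incoming "message"
        (case msg
          (:inc (incf count))
          (:get count)))))

  ;; (defparameter *c* (make-counter))
  ;; (funcall *c* :inc) => 1
  ;; (funcall *c* :get) => 1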

And to put the cherry on the top, what do Java, Common Lisp, and Scheme all have in common? Guy L. Steele worked on the language specs of all of those. Which makes the dig against Java a bit hilarious, though understandable.

http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/m...

> [...] And by such examples one may be led into the functional style of programming, which is possible within Lisp/Scheme, though not required.


"actors and closure are the same concept":

Do you have a good article explaining this? To me, actors are mainly a pattern for dealing with concurrency, and closures seem completely unrelated. But I assume I'm missing something...


The Actor model is interesting because it wasn't originally a model for concurrency. It expresses concurrency semantics naturally, which is why it's been used that way. But, believe it or not, originally it was a mathematical theory of computing comparable to the Lambda Calculus.

If you'd like to explore this space, look up the original papers by Carl Hewitt. He's also written some more recent papers on the subject. The "Lambda Papers" by Sussman and Steele are also instructive; they deal with implementing various control structures and computational patterns with lambdas. Unfortunately there is no "Lambda the Ultimate Object" paper. I'm guessing they didn't go there because OOP hadn't yet taken off, and it may have seemed obvious to the people involved in Lisp/OOP at the time.

In his essay "Objects Have not Failed" Guy Steele addresses the history that deckard1 mentions.

[1] "Viewing Control Structures as Patterns of Passing Messages" - Carl Hewitt

[2] "Actor Model of Computation: Scalable Robust Information Systems" - Carl Hewitt

[3] The Lambda Papers - https://research.scheme.org/lambda-papers/

[4] "Objects Have Not Failed" - Guy Steele, https://www.dreamsongs.com/ObjectsHaveNotFailedNarr.html


Wasn't it specifically garbage collection that he contributed?


Nope. GC is thanks to John McCarthy himself, when he invented LISP around 1959.


Common Lisp also gets you the Common Lisp Object System.


The lecture about GOF design patterns was the first OOP lecture in university where I actually felt I learned something useful.


That's somewhat tragic.


I'm pretty similar.

People like to ask "why should I use <insert language>?" I never really think about it too much - I don't pull out my weights and measures, and say "well, language X is 83.72% optimal for the problem, but language Y is 91.22% optimal, obviously using language X is a huge mistake!"

If something is obviously better for a problem, I'll pick that language. But if you have to dig too deeply into what language to use, the language probably doesn't actually matter that much.

I was "raised" on OOP, and I "got" it, but I've always preferred more standard procedural programming. I prefer Go over Java, and C over C++.

I like FP more than procedural programming. I prefer Elixir and Haskell over Go.

I like Lispy languages more than other languages. I prefer Clojure over Elixir and Haskell.


Don't stop there.

I thought Racket was the holy grail due to the macro system, but my real metamorphosis came after learning Haskell. I realize now that my previous preference for Clojure was because I was stuck in prototyping mode.

Once you truly understand your problem domain then types trump macros.

So while I owe a huge debt to Clojure/Racket for rescuing me from my Java nightmare, and introducing me to other approaches like Erlang, it was Haskell/PureScript that matured me the most as a programmer.

It was so worth the (considerable) effort, because it has generalized the very nature of computation in a way that just never happened for me in all my years with lisp.


“Types trump macros” is a nice way to put it. Macros are amazing for creating new languages, but a good type system with strong case matching and sum types is just incredible to work with.


I was in the process of creating my own language in Racket. It was going to codify everything I had learned over the years into powerful abstractions and target JavaScript.

Halfway through I realized I was inventing a shittier PureScript.

DSLs are mostly masturbation. Custom syntax? Custom evaluation semantics? I always regret it afterwards, like cocaine.

The only thing macros are reasonable for is custom bindings.

Most people are not qualified to invent DSLs. Myself included. And the people who are qualified don't need "language oriented programming" à la Racket. They can spin up a lexer, which is the least complicated part of the task anyway.

As for types, they have a HUGE power to weight ratio. All of my efficiency gains in Clojure due to macros were forfeited 10x over hunting down null pointer exceptions at runtime.

I still love Racket. And I still prototype in Clojure. But Haskell brings you closer to math. Closer to category theory.

Most programmers don't need new symbols and semantics. They need a deeper understanding of combinatorial logic.


Racket DSLs aren't the same as 'spinning up a new lexer' at all. Making DSLs is hard, and being a PLT expert does not make one an expert at language implementation. Those are completely different skills (see Brady's reimplementation of Idris on Chez Scheme and the reason for it).

Racket allows people with new ideas to prototype them efficiently by easily implementing a compiler that would've been an interpreter otherwise.


In my experience most DSLs are not of the quality of Typed Racket or Scribble. And many of the good languages designed in Racket could be just as well designed if they targeted LLVM. And 99% of the programmers I've hired would be better off spending their time deepening their understanding of the existing abstractions and research in the many great languages that already exist, rather than half-baking their own because it's so easy and tempting.

As I said, I like Racket. And I agree with you that it's a nice environment for playing with new language ideas. But most of those ideas are bad and are just incidental complexity. And the 1 in 100 languages that are great ideas and needed by the world are better off spun out with their own parser and toolchain, without being so tied to DrRacket and Chez Scheme.


> hunting down null pointer exceptions at runtime

That seems mostly a fault of the JVM, no?


Well both. Also in Clojurescript.

Clojure's nil punning and Java's Null Pointers are an unholy alliance here.

But a type system like Elm solves the problem before it ever gets to JavaScript for example.


Types also complect, as Rich Hickey points out. Data is just data. When you constrain it with types you lose something. Witness the pain of making JSON fit the type system in a lot of statically typed languages.


In my experience it's the exact opposite. Good design is precisely about constraints.

A good type system like Haskell's gains you much more than it costs. Sure, JSON and Clojure are open and flexible, but that is not what you want when you understand your problem and can specify the types as Ohms -> Volts -> Amps etc., instead of just Number Number Number.

Specifying them afterwards in spec or Typed Racket was just not the same, and always felt similar to writing unit tests after the fact.

Haskell's types are like having a pair programmer sitting beside me. For systems I have in production that were ported from Clojure to PureScript, the reduction in bugs, the ease of refactoring, and the overall confidence are things I didn't know I was missing until I had them.

I understand Hickey's point in his Maybe Not talk, either you have the data or you don't. But at the end of the day, Clojurescript apps subject my users to blank screens and subtle "undefined is not a function" bugs, whereas the Purescript compiler forces ME to diligently deal with those unexpected possible scenarios before I ever ship.


> Witness the pain of making JSON fit the type system in a lot of statically typed languages.

In something like Java, sure. But Java is to static types as BASIC is to structured programming: the execution is so primitive that reasonable people can be misled into rejecting the concept if that's all they know.

Here's a completely painless encoding of JSON in Haskell:

  data Value = JObject (Map Text Value)
             | JArray (Vector Value)
             | JString Text
             | JNumber Double
             | JBool Bool
             | JNull


That friction and vague uneasiness, for me, is the nagging thought that the code is simply not correct. In my case I prefer to write as much SQL as reasonably possible, to avoid having to write any code in an imperative language.


I’m very much in this camp as well. A significant number of applications can be expressed with SQL and a thin layer around it.


+1. And the subset of those applications concerned with simply converting result sets <-> JSON over HTTP may well even tolerate use of an API generator (e.g. PostgREST, PostGraphile, Hasura), reducing that thin layer to merely a membrane.


Interesting. You mean make all the queries as detailed and specific as possible, processing things as much as possible in SQL, so you don't have to mess around with the data you get from/put in the DB in an imperative language?

Do you have any good examples of this? Code bases that can be read?


Messing with data significantly outside of SQL is often asking for trouble.

SQL queries are compiled into very efficient operations which would take a lot longer to get right imperatively. Not only that, but database engines are improving all the time, so the same code you wrote which declares your desired transformations tends to get faster over time and there is nothing to update or refactor. The transformations you write are sort of timeless because they are strongly decoupled from the implementation and hardware.

Lots of data transformation steps in imperative languages require the persistence of an intermediate calculation (e.g., np.ndarray). SQL databases only do this when the query planner deems it absolutely necessary. It will show up in your query plan as a "materialize" step.

The EXPLAIN feature of SQL is the most useful performance debugging tool. It also alerts me to a potential logic flaw quickly when the proposed plan looks insane. I have personally replaced several analytical programs a very large bank used to monitor loan originations and performance.

I don't use any special SQL language features to this day. The most sophisticated it typically gets is involving lots of subqueries, joins, and window functions. The real skill is distilling the convoluted imperative mess into its essence. I got really good at this.

Half the work effort is usually spent on socializing the changes I have to make to their logic to get it working right in SQL, when it changes the behavior of their existing application's logic. Oftentimes I find the client's imperative code attempting to perform a logical operation such as a join, but implementing it incorrectly.

Their existing imperative code frequently produced (subtly) wrong results, or depended on the order of the data returned from the database (undefined behavior). Yikes.

What they actually wanted was provably implemented incorrectly, or relied on undefined behavior. Their mistakes, and the proper resolution in SQL, could easily be verified with pen and paper on a sample set of loans if necessary, to drive the point home.


> Not only that, but database engines are improving all the time, so the same code you wrote which declares your desired transformations tends to get faster over time and there is nothing to update or refactor. The transformations you write are sort of timeless because they are strongly decoupled from the implementation and hardware.

Only true up to a certain extent. Counterexample: performance regressions due to changes in the query planner / its input statistics. Changes aren't always positive and logically equivalent plans can have very different perf characteristics.


Fully agree. I am mostly on the analytical side. When my client uses Snowflake it's usually smooth sailing because it's so automated and performant. When I have my own analytical Postgres instance on my local machine I tune costs and memory consumption parameters in postgres.conf but I only rarely run into major gotchas. If my client uses IBM... I go for a walk on the beach or go out to lunch when I launch my query.

Your point that equivalent plans can have very different perf characteristics is very true. I always try to review the query plan with EXPLAIN if my query takes more than a minute, and rewrite the logic if necessary.


Very cool. So you write it with solid, declarative SQL and you can trust that it will be rock solid and optimized. I need to learn SQL instead of just doing NoSQL all the time. Thanks for the explanations.


I load JSON into Postgres all the time these days for analyzing data for a government client and use Postgres JSONB operators to untangle and index it. JSONB because I don't care about preserving the original string representation of the record (strings in fields are still completely intact).

Although I heavily lean on psql with \COPY and stdin or stdout pipes to a compressor like zstd (psql can natively pipe CSV to and from any program reliably, even on Windows), I found loading JSON records to be extremely frustrating this way.

Whatever you do, NEVER use pipes in PowerShell. They don't stream. They buffer fully in RAM until your computer crashes. Microsoft is insane.

Since you use NoSQL, you can write a very short Python program that uses psycopg2 directly to load a list of dicts as JSONB rows into a single Postgres table with one column (I call mine "record").

At that point you can basically use Postgres as a NoSQL database and structure the records if you want using a VIEW.

We're in the process of documenting their use of JSON for financial records keeping as a design defect.


> Their existing imperative code frequently produced (subtly) wrong results, or depended on the order of the data returned from the database (undefined behavior). Yikes.

That just sounds like very sloppy programming.


I'm in agreement with GP; I work daily on code that does it the traditional way, and I use the "do as much as possible in SQL" in my own side projects, and it always seems more maintainable to push off all the data logic into SQL.

> Do you have any good examples of this? Code bases that can be read?

I don't know of any projects that do this, but I do this for many of my side projects.


Keeping the business logic in-database was a major move in my career and a huge source of "job well done" sentiment. My perfect combo today relies on golang as the (thin) API layer, querying data and doing CRUD using PL/SQL procedures, enforced by triggers and constraints.


> Early in my career, I thought my lack of "getting OOP" was due to some profound missing piece of understanding

It simply does not have formal definitions and examples. Almost everything in OOP books and content is defined like "imagine you have this situation and you want to do this thing...", and then it starts with ships, containers, ports, and cranes, or animals and lions.

The thing I like about FP is that there are surprisingly few things to know, and they all have formal definitions. The definitions are also quite simple and build on previous definitions. If you know what a data type and a `map` operation are, you can easily understand a functor. You add another ingredient and you have an applicative functor. Another one, and you get a monad.

You make small steps towards knowledge that are solid.


Syntax, imperative statements, and classical object verbosity and inappropriate structure always hurt me.

Whenever I read a Lisp/ML book, I sweat a bit and then I gain superpowers and insights; whenever I read mainstream OO books I just get a neverending stream of "please stop... why??"

And to be frank it's not only Lisp, but a mindset that Lisp embodies a lot. Even using an HP48 felt better than BASIC or algebraic-entry calculators (at the time I had no idea what RPL was).


Me too. I've also been trying to learn more about OOP and reading "Smalltalk, Objects and Design" because I heard a lot about how Smalltalk is the real, good OOP. But there were points where I just felt squeamish, like when they were talking about the danger of the difference between effect and output on methods, where I felt anxiety and thought "ooo, why would I wanna do that??" I can just feel my code getting messier...

Some ideas, like using polymorphism to avoid branching conditionals, were interesting though, and I'm sure there are times they can be helpful.
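(To make that idea concrete, a toy sketch in Common Lisp, where generic dispatch plays the role Smalltalk gives to messages; the names are made up:)

  (defclass dog () ())
  (defclass cat () ())

  ;; Instead of (cond ((typep a 'dog) ...) ((typep a 'cat) ...)):
  (defgeneric speak (animal))
  (defmethod speak ((a dog)) "woof")
  (defmethod speak ((a cat)) "meow")

  ;; Supporting a new animal is just another defclass + defmethod;
  ;; no existing branch needs editing.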


For me, it's the opposite, I like Lisp and Scheme because they don't force me into one programming paradigm. I'm not very fond of FP but used OOP almost in every project when I was programming in Racket (for a decade or so). I often wrote more functional code first and then wrapped it into classes to "assemble" it under one roof.

Lisp and Scheme are great languages for object-oriented programming.


My comments were not at all meant to make a statement about FP vs OOP; for me they both have their place, and I agree that Lisps are great OOP languages. It just turned out for me that FP came more naturally as a new programmer, and learning about the subtleties of lambda the ultimate, lexical scope, and state from an FP perspective helped me finally feel like I grok OOP. I was pleasantly surprised to see someone else express a similar sentiment.


I am not against or for any reasonable paradigm. I use multiple, however I see fit, depending on the particular situation. I never wanted to be a "purist".


Indeed. Sometimes it's more obvious to say "here are the steps to make this thing, now go" and sometimes it's more obvious to say "here's a description of the things I want."


My experience has been exactly the same.


Show me a GUI framework that follows FP patterns not OOP.


This is of course a silly but valid criticism of a post claiming FP is better than OOP in every situation. As others have pointed out, there are functional GUI frameworks, though of course something state-heavy is going to be more naturally modeled with OOP in some cases.

But that is all beside the point- OP specifically did not claim FP was always better, just that they found it more natural. That personal and subjective experience (which I share) is utterly orthogonal to the relative maturity of GUI libraries! The fact is, most GUI frameworks are OO, and the ones I learned were all OO. And I hated them! I am not a person who enjoys making GUIs, and that's in large part because I am not a person who enjoys OOP. When I learned my first functional language, it felt like a breath of fresh air. While I was a perfectly serviceable Java and Python programmer, Haskell just made sense immediately. That doesn't make Haskell a better language in absolute terms, but it was much better for me.

Now that I've worked with functional languages, I find I actually don't mind imperative languages at all. I don't really need 100% of the bells and whistles to write composition focused code that meshes with my brain, especially since just a hint of state here or there can free you up to do the rest of it statelessly. And now I find that as I build interfaces with LiveView or Vue that I actually do enjoy making GUIs- even if there is a little state here or there.

People, and this might come as a shock, are sometimes different from each other. And sometimes their talents don't match up perfectly with whatever task you feel is important in the moment. There's lots of programming which isn't as state-heavy as a GUI, just as there's lots of programming more state heavy than a data processing pipeline.


Off the top of my head:

  * React
  * Elm
  * Halogen
  * Phoenix
  * Phoenix LiveView
  * Re-frame
  * Reagent


React is built on JavaScript OOP, with instances of JavaScript objects for data types and DOM manipulation; without those, it doesn't exist.


React moved away from OOP quite a while ago. It's mostly pure functions and closures that return what are functionally records.

When the JS tuple/record proposal is finally implemented, I suspect they'll go even further in that direction.

Of course, you could argue that closures are simply objects with one method (or that objects are a poor man's closures), but that's a rather pedantic line of discussion.


React is written in JavaScript, an OOP language with prototype inheritance similar to Self. It doesn't get more OOP than that.

Take anything that has object semantics on the JavaScript VM out of React source code, and it will be reduced to almost nothing.


The fact that JS has OOP doesn't mean your code relies on those OOP features in any meaningful way, and MOST modern code does not.


JavaScript functions and basic data types have member functions and object prototypes, so...

It is like writing React in Smalltalk and arguing that because the code uses code blocks it isn't OOP.


SwiftUI


Plenty of Swift classes and method dispatch going on there.


Jetpack Compose (Android)


Plenty of Kotlin classes and method dispatch going on there.


Aside from all the FP-style GUI frameworks others have listed, it's also worth noting that video game development is adopting "data oriented design" [0], rather than OOP (usually in C++).

[0] https://www.youtube.com/watch?v=rX0ItVEVjHc


In this case FP is even worse (you need to copy information to use it). I don't feel this is really a question of which paradigm wins, but more about making a paradigm that pushes the limits of the hardware.


A smart enough compiler will eliminate 90% of the unnecessary copying, and things like linear/affine types will eliminate the rest.




I wonder if Dear ImGui would count, or if it's just imperative.


I would put it more on the declarative side of things than functional or imperative. It does feel like a better fit to imperative programs though where execution is moving through the code as opposed to functional programming where expressions evaluate to results.


The article mentions X-expressions for XML and HTML. If you decide to play with that, you might also want to look at another representation, SXML, which has more advanced library support in the Racket ecosystem.

https://docs.racket-lang.org/sxml-intro/index.html

(Basically, early in Scheme, there were a bunch of independently-invented representations for XML. It turned out that Oleg Kiselyov's XML parsing work, SSAX, was better than anyone else's. So I converted my libraries to be compatible with his SXML representation, and others also built tools using SXML. But core Racket (then called PLT Scheme) seemed stuck with X-expressions. Today, one of the core Racket professors maintains the Racket packaging for Oleg's SXML libraries, but it's contrib rather than core. People starting with the core docs might only hear of X-expressions, until they start searching the contributed package directory, and find those packages generally use SXML.)


And shout-out to you, Neil, for maintaining a lovely HTML parser for Racket. You recently pushed an update to help me with a project of my own.


Thanks, it's always nice to hear things like that.


Related:

Why Racket? Why Lisp? - https://news.ycombinator.com/item?id=28966533 - Oct 2021 (2 comments)

Why Racket? Why Lisp? - https://news.ycombinator.com/item?id=19952714 - May 2019 (122 comments)

Why Racket? Why Lisp? - https://news.ycombinator.com/item?id=15473767 - Oct 2017 (1 comment)

Beautiful Racket: Why Racket? Why Lisp? - https://news.ycombinator.com/item?id=13884234 - March 2017 (7 comments)

Why Racket? Why Lisp? (2014) - https://news.ycombinator.com/item?id=9268904 - March 2015 (164 comments)

Why Racket? Why Lisp? - https://news.ycombinator.com/item?id=8206038 - Aug 2014 (280 comments)


I think Racket is pretty awesome, but I will admit that I think Clojure is a ten times easier sell. The value of being able to interop with existing JVM languages really cannot be overstated, and it makes it substantially easier for employers to buy in.

That said, when I'm doing personal projects, I don't particularly care about what my employers think...but I still prefer Clojure. I think the native hashmap data type makes it just an exceptionally appealing Lisp, and I find that the language never gets in my way.


ABCL should be an easy sell then as well.


The bottom line, made explicitly by this commentary (although attributed to only one component), is that various good design choices (improved over its 60+ years!) make Lisp very close to the language of thought and of natural expression. You can almost turn the natural expression of your computation directly into Lisp by judicious use of parens and by reading the phrase "of the" between function names and arguments; where you can't do this, that's a hint that you need a macro.
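For instance (my own toy illustration in Common Lisp, with made-up helper names):

  (defun squares (xs) (mapcar (lambda (x) (* x x)) xs))
  (defun sum (xs) (reduce #'+ xs))

  ;; reads as "the square root of the sum of the squares of xs"
  ;; (assuming XS is bound to a list of numbers)
  (sqrt (sum (squares xs)))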


From the article - "[2021 update: I no longer contribute to Racket due to abuse & bullying by the project lead­er­ship. Everyone in the broader Racket commu­nity, however, has always been helpful and kind.]"


Let's not derail the discussion.

To be clear, the author still continues to use Racket. His statement is more about his contributions and evangelizing.


Maybe the author was the bully? I don't see how it's helpful to perpetuate hearsay, especially without any context.



He actually attributes malicious intent ("campaign") to this other person. Based on no evidence whatsoever, just feelings apparently. The apology was gracious, forthright, and entirely believable. The guy is uptight and blows up. Does that make him a "bully" who "campaigns" to harass other people?


There is context, if you just open the article and follow the link in the same sentence. But I guess that's asking too much.


Maybe when the same URL is posted again on HN, the platform should automatically insert a first post with links to the previous discussions, as someone did here. It's not irrelevant to repost, but it could be useful to have pointers to previous conversations. Similarly, the platform could warn the poster that it's a repost (a bit like Stack Overflow does, but simpler: just using link equality).


You can just click "past"


Oh. Hey! Learn sumptin new ery day! I never even noticed that button! I’ll take the positive view of, great minds etc (as opposed to the more correct view that I’m an idiot! :-)


It can be difficult to explain why Lisp is so great to non-Lisp programmers. For Lisp, the whole is greater than the sum of the parts.


This might help https://malisper.me/category/debugging-common-lisp/

I am not aware of another lang/platform that can offer this kind of flexibility, except maybe Smalltalk or Erlang, but then they don't have the homoiconicity.


This article just kind of did that for me. Loved that they likened Lisp to Lego blocks a few paragraphs after I had the exact same thought (when they mention that "everything is an expression").

Even after having read Hackers and Painters and some of Clojure for the Brave and True, this is the article that makes the power of Lisp click the most for me.


That's true of Erlang as well. The language (and VM) design elements are very complementary.


After years of reading Lisp articles, my take is that the macro system is what sets Lisp languages apart. I was surprised then that the author didn't find them that useful.

> [Paul Graham] describes Lisp as his “secret weapon”. OK, so what’s the secret? He says “programming languages vary in power”. Fine, but what exactly makes Lisp more powerful? ...Lisp’s macro facility, which he describes as its ability to make “programs that write programs”. After four years using a Lisp language, I’d agree with Graham that macros are great when you need them. But for someone new to Lisp languages, they’re not necessarily a bread-and-butter benefit.

Further down...

> Paul Graham calls Lisp a “secret weapon”. I would clarify: Lisp itself isn’t the secret weapon. Rather, you are—because a Lisp language offers you the chance to discover your potential as a programmer and a thinker, and thereby raise your expectations for what you can accomplish.

What's the secret weapon if it isn't macros? I know what I need from a language that matches my particular quirks as a programmer and helps me shine and it's not Lisp.

This article made me interested in Racket's libraries, because it sounds like they might be more interesting than other languages' libraries.


Not everything is about secret weapons... it is just simple and clear to use S-expressions, people like simple and clear.


I don't think that macros are the singular thing that sets Lisps apart. I do think that they're an extremely useful feature to have, especially if you build programs and systems by building languages in which to express them.

Lisp's macros are a way to define a syntax that you would like to be able to use, and arrange for Lisp to rewrite that syntax into something that Lisp already supports.

Two circumstances where they come in really handy are:

1. You have to repeatedly write a bunch of boilerplate that never changes, except for maybe a few small particulars. You can write a macro to turn all that boilerplate into a succinct expression that writes the boilerplate for you. As an example, DEFCLASS can be implemented as a macro that writes all of the low-level machinery of setting up a class and all its appurtenances so that you don't have to.

2. You need to write some code that is error prone in some way, and you want to ensure that relevant protective or recovery measures are never ever forgotten. You can write a macro that writes the protective or recovery measures for you around the error-prone code. As an example, WITH-OPEN-FILE is a macro that ensures that an opened file is properly closed, no matter how control exits the WITH-OPEN-FILE form.
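A sketch of how such a macro might be written (simplified, and not the real WITH-OPEN-FILE expansion, which also handles aborting the file on non-local exit):

  (defmacro with-open-file-lite ((var filename &rest options) &body body)
    `(let ((,var (open ,filename ,@options)))
       (unwind-protect
            (progn ,@body)
         (close ,var))))

The UNWIND-PROTECT is written once, inside the macro, and every use site gets the cleanup for free.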


4. In Common Lisp, you can intercept the macroexpansion process with the macroexpand hook. This special variable, when bound to a hook function, allows macro expansion to be arbitrarily modified. All sorts of modifications to code can be performed in a dynamically controlled way without changing the source code being compiled.

http://www.lispworks.com/documentation/lw50/CLHS/Body/v_mexp...


3. You want to do something that in another language would require a separate preprocessor. You can do arbitrary computation in the macro, so it becomes a preprocessor.
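A toy sketch of that (my example; it assumes the argument is a literal integer):

  (defmacro ct-factorial (n)
    ;; Runs at macroexpansion time; the expansion is just a constant.
    ;; N must be a literal integer for this to work.
    (labels ((fact (k) (if (<= k 1) 1 (* k (fact (1- k))))))
      (fact n)))

  ;; (macroexpand-1 '(ct-factorial 10)) => 3628800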


Yep. Good point.


> I know what I need from a language that matches my particular quirks as a programmer and helps me shine and it's not Lisp.

Yes, but the argument is that, once you've experienced Lisp, your quirks as a programmer completely change. It might not be so much that you need to decide whether you need macros. Instead, decide whether there might be a different approach to writing code that could let you express yourself more naturally. See PG's essay on Programming Bottom-up.[1]

[1] http://paulgraham.com/progbot.html


I took the first quote as saying you need a certain amount of comfort with Lisp before macros matter, not that they aren't incredible once you understand everything well enough to use them.


A couple of people were nice enough to take the time to explain Lisp macros to me in case you're interested: https://news.ycombinator.com/item?id=32730470

I can definitely see their power, but I don't have any ideas on what I would do with them. I guess I'd have to write Lisp for awhile to figure that out.


One way to think about macros is as custom syntactic sugar that you can build yourself. For example, I don't think Rust has parameter arrays (see params in C# for an example [1]). So for things like println, which takes a variable number of arguments to inject into the string wherever it finds {}, Rust uses the println! and format! macros. Because the parser doesn't support the sugar of taking a comma-separated list and turning it into an array the way C# does (or however other languages handle the same idea), the macro pass of the compiler instead generates the code necessary to make that number of arguments work.

There are likely other uses I've never considered, but that is a simple one.

[1]: https://docs.microsoft.com/en-us/dotnet/csharp/language-refe...


This article is an appendix for the online book Beautiful Racket, and most of the book is about how useful macros can be. It's a very accessible book - not at all advanced.

A number of folks have listed the book as one of their best all time programming books. I finally read it last year, and I agree with that characterization. It's a very well written book.


This is the case for me as well. I was learning Common Lisp for fun, and I loved it, but macros never clicked for me. I could understand how they're powerful, but I never developed a sense for when using a macro would make things easier. Probably just not enough time with the language.


> ((if (< 1 0) + *) 42 100)

This gives me an error in SBCL. Error below:

  ; in: ((IF (< 1 0) + *) 42 100)
  ;     ((IF (< 1 0)
  ;          +
  ;          *)
  ;      42 100)
  ;
  ; caught ERROR:
  ;   illegal function call
  ;
  ; compilation unit finished
  ;   caught 1 ERROR condition
Why does it not work, and how do I make it work?


See moonchild's response for the why; here is the how: (funcall (if (< 1 0) #'+ #'*) 42 100)

Edit: this is actually a pretty famous interview question for Lisp jobs: write a program that is valid in both CL and Scheme but produces different output. The solution is always to use the fact that Scheme is a lisp-1 (i.e. it has ONE namespace for variables and functions) and CL is a lisp-2, with TWO distinct namespaces.
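One sketch of a solution (there are others) that exploits exactly that difference:

  (let ((list (lambda (x y) 42)))
    (list 1 2))

  ;; Scheme (lisp-1): the local LIST shadows the built-in, so the
  ;; lambda is applied and the result is 42.
  ;; Common Lisp (lisp-2): LIST in operator position refers to the
  ;; function namespace, so the result is (1 2).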


Finding out about lisp-1 and lisp-2 when hacking in elisp turned me off so much I've decided to write my own editor in Scheme. This is my personal crusade for the next X months.


How about using Neovim and writing scripts and configs in Fennel? https://fennel-lang.org/


vim is great but Emacs is on a whole other level for me.

But I would love to spend some time to write a löve2d game in Fennel, for sure.


I have yet to find a use-case where I really want a lisp-1 over a lisp-2. Though I'd really love to see some production code that exploits the single namespace.


It's kinda nice when you want to, say, name a list "list" despite there being a list function. But I agree, it isn't super important.


The thing is that Lisp-1 doesn't prevent you from naming a variable list; you may then have a problem if you try to use the list function in the same scope.

It is a very important issue, because it interacts with hygiene in the face of macros.

Lisp-1 dialects, and more broadly, single namespace languages, require hygienic macros if they have a macro system.

Hygienic macro systems are gross; not everyone likes their complexity. Their referential transparency comes at the cost of sacrificing implementation transparency.

A user of a Lisp-2 language can pretend that hygienic macros don't exist, and be happy for the rest of their days.

The reason is that if you do this in a Lisp-2, you're safe:

  (let ((list '(1 2 3)))
    (opaque-macro list))
Suppose opaque-macro has an expansion which contains (list ...) calls. Those are unaffected by the variable. opaque-macro would cause a problem if it tried to bind a variable called list. That is taken care of in unhygienic macro programming by using explicit gensyms.

There could be a problem if there was a list variable that opaque-macro's expansion depends on. For that we have (1) naming conventions like *list* for variables and (2) the package system: macros coming from a foobar library reference the variable foobar::list, unrelated to the user's (let ((list ...))).

The whole Lisp-2 + unhygienic macros (+ packages) approach is easy to work with and understand, simple and pragmatic.
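A minimal sketch of that style (my example; CL's built-in ROTATEF already does this, so it's purely illustrative of the gensym pattern):

  (defmacro swap-places (a b)
    (let ((tmp (gensym)))       ; fresh uninterned symbol
      `(let ((,tmp ,a))
         (setf ,a ,b)
         (setf ,b ,tmp))))

  ;; Even (swap-places tmp x) is safe: the macro's TMP is a gensym,
  ;; distinct from any user variable spelled TMP.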


I think you have it backwards. That's why you'd want a Lisp-2 over a Lisp-1.


That's what I was describing, yes.


Common Lisp has separate function and value namespaces, unlike Scheme. Try (funcall (if (< 1 0) #'+ #'*) 42 100).


I swear, Lisp fandom is the Amiga fandom of programming languages. You have these dedicated supporters who are willing to go to extreme, Herculean lengths to prove that their chosen thing can not only keep up, but in some sense surpass what is popular, combined with nostalgia about the past and "what could have been". In the distant past, Ph.D.s have been awarded based on such feats, like "Look, we can compile Lisp code to run half as fast as Fortran" and "Look, we designed a chip that runs Lisp as a native instruction set". Simultaneously ridiculous and cool, the very picture of hack value. But you're still engineering around the drawbacks of a slow, dynamically typed language.

The Lisp machines were amazing in 1979. But no one is seriously building a Lisp machine today, because modern languages with modern IDEs on ordinary hardware have nearly all the features that Lisp machines had -- including hot-reloading code inside a live, running system -- and get you so, so much farther in the same amount of time than Lisp machines ever could.

Use a modern language inside a modern IDE, and you'll be fine.


Is there anything wrong with Amiga fandom? Amigas were cool as hell. They deserve their cult following. As far as Lisps go, they're cool as hell too. Just because they're not up to par with corporate-backed modern languages doesn't mean you should use them for the same purposes. Be a bit open-minded for a change and you'll learn a thing or two.


I own an Amiga and use Lisp all the time. But the Amiga's coolness peaked around 1990; Lisp's coolness peaked in the late 70s or so.

Lisp still feels great to use, but it doesn't really have an advantage over modern programming languages the way it did over languages from the 70s/80s, or even stone-knives-and-bearskins Unix development without an IDE. Many of the killer features of Lisp machines could be found in 90s Visual Basic.

It kind of parallels the Amiga situation: by 1994 PCs had closed the gap, offering more CPU power and even a better gaming experience than the Amiga, but diehards still keep comparing their boxes to the 5150 PC with CGA to feel superior.

So if you want to use Lisp because it feels good to use -- the way an MGA is fun to drive -- that's fine, but in the end you're no better off with Lisp than you would be with C# or TypeScript, and in some ways you're worse off.


You are simply trolling. People working with Lisp today are not tinkering in a retrocomputing context in order to recreate past experiences, though there will always be a few of those hobbyists too, to be sure.

It is not comparable to Amiga because you can deliver a solution in Lisp on the users' current platform, and they don't even know or care what you used. You can't put an Amiga solution into production such that the users don't know you're deploying Amigas (unless it's some emulated Amiga VM hidden under a hood or something and not actual hardware).


Like Clojure with VS Code or Cursive/IntelliJ, you mean? NuBank also seem to be doing OK with Clojure.


Sure, why not. Clojure seems to be the only Lisp that's willing to learn from modern languages anyway.


I love Racket. It helped me understand the concept of delimited continuations and macros.


A comment on functional programming and mutation of data: I learnt R before I learnt Python.

In R:

  y = c(1, 2, 3)
  x = y
  # now x is a copy of y

I was surprised that in Python:

  y = [1, 2, 3]
  x = y
  # now x is y! What, why?

In R, I am also used to the fact that everything is an expression:

  a = if (x > y) { 0 } else { 1 }


R has call-by-value semantics. Python has call-by-reference. These are two entirely different programming paradigms.

https://stackoverflow.com/questions/15759117/what-exactly-is...


Python is not call-by-reference. If it were, we would expect 2 to be printed below, but it is not:

  >>> def f(x):
  ...     x = 2
  >>> x = 1
  >>> f(x)
  >>> x
  1
One might refer to R as being referentially transparent, or immutable (I don't know if it is; I am assuming). I would refer to Python as using uniform reference semantics: http://metamodular.com/common-lisp-semantics.html


What does that Python code mean, by the way? The passing of the value into the function is clear enough, but I have no idea whether f is assigning to the local x, or binding a new one.

If we capture a lambda before "x = 2", and then call it afterward, does it see an x which holds the original argument value or which holds 2?

In Lisps, this stuff is clear. Mostly. Except Common Lisp has some areas where it waffles: in some iteration constructs, implementations can bind a new variable or reuse the same one. Those choices are individually clear, though.


Creating a new one. It isn't a closure, so:

  x = 3
  def f(x): # note the name here
    x = 20
  f(x)
  x # => 3
Similarly, using a different parameter name:

  def g(y):
    x = y
  g(20)
  x # => 3
You have to explicitly mark the `x` in the function to be the global one to reference it:

  def h(y):
    global x
    x = y
  h(10)
  x # => 10
With a lambda it would be a closure:

  i = lambda y : x # throwing away y for fun
  i(3) # => 10
  x = 20
  i("hi") # => 20


No, I mean this:

  def f(x):
    g = lambda : x
    x = 2
    return g()

  f(42)
What is does f(42) return?

Holy fuck, I simply cannot just cut and paste the above into Python as-is, because of the "unexpected indent" which I added so that HN shows that as quote. It's complaining about an unexpected indent.

Kill. Me. Now.

Anyway:

  >>> def f(x):
  ...   g = lambda : x
  ...   x = 2
  ...   return g()
  ... 
  >>> f(42)
  2
So what it looks like is that there is only one x in the function, established by the parameter. This x is captured by the lambda, and is then reassigned the value 2. The lambda retrieves the 2.

For completeness, we show that if x is not assigned, g accesses f's argument value:

  >>> def f(x):
  ...    g = lambda : x
  ...    return g()
  ... 
  >>> f(42)
  42


`f` behaves just like a closure. It can even be assigned to a variable.

    def f(x): x = 2 # <- this `x` is the same `x` as in the argument; it can be accessed via locals() internally

You can even assign `f`. For example, `function = f`.

Python is call by reference. Change my mind. `def f(x): x[0] = 1` will manipulate whatever object you pass to it.


If Python were fully call by reference then:

  def g(y):
    y = 3
  x = 10
  g(x)
  x # => ??
If it's 3 then it's pass by reference here; if it's 10 it's pass by value. Which is it?

Additionally, for something to be a closure it has to close over something (an environment). What does `f` or `g` close over? Note that they aren't changing any environment, they are "merely" functions, not closures. Python does have closures, but those aren't examples of them.

And being able to assign a function to a variable does not make a closure, or do you think that C has closures because it has function pointers?


> Python is call by reference. Change my mind.

No need to change your entire mind; just an incorrect definition of call-by-reference.

Passing a value which has reference semantics isn't call-by-reference.

Your f receives x by value. That value is a reference into a boxed object. The x[0] = 1 does not change x; it changes the boxed object.

  >>> def f(x):
  ...    x[0] = 1
  ... 
  >>> a = [ 0, 2, 3 ]
  >>> b = a
  >>> f(a)
  >>> a is b
  True
The f function can do nothing to make "a is b" false.

Under pass-by-reference, assigning to x would do that.

Under pass-by-reference, we can pass a reference to a, such that a can be replaced, without affecting b.

"pass-by {whatever}" refers to the semantics of the parameter and what it receives from the argument expression, and how, not the semantics of the argument object/value.


I don't expect 2 to be printed. I think you're conflating scope with calling conventions. x is locally scoped to the function. `f` behaves like a lambda/closure.

Would you expect `(define f (lambda(x) x))` to have awareness about some global `x`? The only difference with Python is that the closure has access to the global scope unless the variable is locally defined and a new reference is created in the frame.


You're trying to demonstrate dynamic scoping.

Python has lexical scoping and neither set of scoping rules is directly related to passing by reference or value.


Nope, that code actually does demonstrate the difference between pass by reference and pass by value. If you prefer, change either the `x`'s in the function to be `y`, or the global `x`'s to be `y`, and you'll see the same behavior. Which shows that Python is not, in fact, pass by reference.


I think I see what you are saying (the shared variable threw me), but it STILL isn't doing what you think.

    def s(x):
        x[1] = 2222
        return x

    x = [1,2,3]
    print(s(x)) # prints [1, 2222, 3]
As you can see, it is pass by reference.

Tuples, strings, numbers, and some other things are immutable.

When you pass them, you pass a reference (from the user's perspective, as this can be optimized to pass-by-value as the interpreter sees fit). When you update that reference, because they are immutable, you must point the current reference at a new location in memory. Because you are changing what the new pointer variable in the function points to, of course the other pointer doesn't update.


> As you can see, it is pass by reference.

No, it's passing a reference, but it is still not pass by reference. This is actually worth distinguishing, because they are different behaviors. That function (by moonchild) does exactly what moonchild meant it to do, which was to demonstrate that Python is not pass by reference. Another function demonstrating that Python is not pass by reference:

  def swap(a, b):
    a, b = b, a

  a = [1,2]
  b = [3,4]
  print(a,b)
  swap(a,b)
  print(a,b)
If Python were pass by reference (versus passing a reference in these cases) then it would perform a swap, but it does not perform a swap because it is not pass by reference. C++ has pass by reference (as an option), as do Pascal and Ada, and more languages. But Python is not among them. An actual pass-by-reference swap, in C++:

  template<typename T> // template to be the most equivalent to the intended Python code
  void swap(T& a, T& b) {
    T t = a;
    a = b;
    b = t;
  }
That will actually perform a swap when called with `swap(x,y)`. Any language that offers (by option or by default) pass by reference can have an equivalent swap function, including a generic one like the above if the language offers some notion of generics or is dynamically typed (like Python).


Yes, pointers are themselves passed by value (no pointers to pointers) due to garbage collection, where the ability to directly alter them would cause crashes (like basically every C++ program does if you spend enough time fuzzing it).

That doesn’t change the fact that they are indeed pointers (aka references).

How are pointers to pointers passed? You don’t want to reference another stack frame directly, so you pass by value.


>>> As you can see, it is pass by reference.

> Yes, pointer’s are themselves passed by value (no pointers to pointers) due to garbage collection where the ability to directly alter them would cause crashes (like basically every C++ program does if you spend enough time fuzzing it).

I'm going to be frank, you're arguing both sides here which is very frustrating. You've said that Python is pass-by-reference, that it passes references, and now that it is pass-by-value (I think that's what you're trying to say). So do you believe that Python is pass-by-reference or not? If you believe it is, and if mine and moonchild's functions demonstrating the opposite aren't persuasive, here's the Python FAQ itself saying that it is not, in fact, pass-by-reference:

https://docs.python.org/3/faq/programming.html#how-do-i-writ...

> Remember that arguments are passed by assignment in Python. Since assignment just creates references to objects, there’s no alias between an argument name in the caller and callee, and so no call-by-reference per se.


Because it is expensive to copy a large container? Which is why you want immutable data structures, in which case copying is superfluous. So the problem with Python is not the aliasing in assignment, rather it is the mutability of the objects that could create unpleasant surprises.


(2016), according to the comment in the source


Enough praise for FP. Now let's go back to the real world.

Please help me understand: what's the best way to replicate what's good in OOP, like how to use FP to solve the expression problem?


> like how to use FP to solve the expression problem

Some common approaches:

- Tagless final style: https://okmij.org/ftp/tagless-final/

- Datatype-generic programming: https://markkarpov.com/tutorial/generics.html

- Polymorphic records and variants: https://www.cl.cam.ac.uk/teaching/1415/L28/rows.pdf


I'm a very big fan of the "object algebras" approach [0], which is essentially tagless final minus the automatic instance resolution you get with typeclasses. It works great even in fairly restrictive type systems like Java's.

[0]: https://www.cs.utexas.edu/~wcook/Drafts/2012/ecoop2012.pdf


The expression problem exists in both OOP and FP [1].

[1] https://wiki.c2.com/?ExpressionProblem


I can understand the pushback against FP, but it doesn't follow that if FP is a poor choice, then the only realistic alternative is OOP.

I do a lot of Python stuff - much of it is not functional programming. And very little of it is OOP.

The programming world doesn't consist of just these.


These are the dumbest kind of comments. People are doing real-world work in FP, but because it's not the work you do (or you're not comfortable doing the work you do in FP), you're deluded into thinking that no real work gets done in it.


He's a full-stack JS developer, which makes it even dumber, because he uses React, which is basically the de facto JS front-end framework.

React is trending heavily away from OOP, its creators heavily buy into FP, and most of the patterns in React are inspired heavily by FP. Functional components are now the way to go.

ReasonML (a pure FP language) is React as intended, and is the north star of React.

To learn more here is a video from Jordan Walke creator of React (and ReasonML): https://www.youtube.com/watch?v=5fG_lyNuEAw

I recommend the parent commenter learn more about the very tools he himself works with before disparaging FP and himself.


OOP is a mass delusion not dissimilar to religion.



