Why I still Lisp (mendhekar.medium.com)
284 points by lycopodiopsida on Jan 31, 2021 | 226 comments



I think Lisp is going to live forever as a niche tool for people who "get" it.

I use Clojure on a daily basis. Not necessarily because it is best Lisp, but rather because I am working with Java applications and being able to reuse the same Java code I have already developed is a huge boon to me. If I was able to choose, I would be using Common Lisp.

The way I use it is to quickly develop adhoc tools and PoCs and more rarely small UIs in re-frame.

Working with Lisp is a joy for me and I always get this feeling of frustration when I have to go back to Java, especially if I have to translate to Java what I just prototyped in Clojure.

Unfortunately, where I work (financial systems for financial institutions), I have trouble finding enough people mature enough to even propose working projects in Clojure.

Close to me in the org structure there is a set of Clojure projects where the company got badly burnt. The code is a mess and a huge headache for management. The guys who wrote it were either promoted (claiming outstanding work on the project but not wanting to maintain it) or left. Now new developers are barely able to change anything without blowing it up.

This underlines the fact that Lisp projects can very easily end in spectacular disasters. You have to be really incompetent to write a typical Java service in a way where it is no longer possible to develop it at all, but in Clojure it is just enough to get couple of people that are intelligent enough to write macros but not experienced enough to understand the dangers of lack framework forcing the structure of your application.


Honestly I’ve seen just as many disastrous codebases written in highly structured, statically typed languages. Any language with intrinsically interesting features tends to attract inexperienced (or just bad) developers who don’t appreciate the tradeoffs of their tools and design decisions, and think the tool is a panacea. This leads to disastrous codebases, and is not at all limited to Lisps or dynamically typed languages. I can point to just as many disasters written in statically typed languages, and they tend to be the “interesting” ones with fancy type systems in the ML family like Scala, Haskell, and Rust. (I am a huge fan of both Lisps and ML-family languages.)

I think the important point is to be very aware of this risk and consider it when making technical decisions. Sometimes the tradeoffs are worth it to use a powerful tool, depending on the team and the product, and sometimes you’re better off using boring tools that won’t distract smart engineers.


I 100% agree that disastrous codebases are every bit as likely in other languages. But it somehow hurts more in a lisp.

I think that the syntax really is the culprit. In a language like Java, the Byzantine syntax and semantics enforce some minimum level of structure on the code. It's not much, but it's something, just enough to give an experienced programmer a few extra heuristics they can use to make sense of what they're seeing. Even some things that normally drive me nuts about Java, like the way that there's no standard way to call an anonymous function (because the invocation is done through a method whose name is allowed to vary) end up being useful bread crumbs when I'm reading somebody else's code.

In a Lisp, though, everything looks more-or-less homogeneous. You can memorize the names of special forms, but, beyond that, you can't quickly tell whether some thing you're looking at is data, a function, or a macro, or what. It all blends together into a disorienting fog. You need to go trace its provenance to see how some new value is defined or constructed before you can even tell what kind of thing it is. (This, tangentially, might be a great argument in favor of lisp-2, though I haven't spent enough time in a lisp-2 to have an opinion that's worth beans.) It may, in the grand scheme of things, be only a small reduction in what you can assume without careful study. But, in an unfamiliar codebase where you don't know much to begin with, every little scrap of knowledge counts.

It all points to an insight I should have had 20 years ago when I was working in Perl: Features that make a programming language pleasant to write tend to make it unpleasant to read, and vice versa.


I spent a lot of time thinking about this.

What makes Perl or Clojure such fun languages to work with is they don't impose structure on you. You can do whatever the hell you want.

On the other hand, the result is that the code basically represents how you think about the problem, which will be very different for every person.

Languages with frameworks like Java+Spring, Ruby+Rails etc. are less "fun" to work with because you are not given as much freedom and likely have to fill in a lot of stupid, mundane boilerplate, but they are easier to maintain in a team setting because developers know more or less what to expect from the code. Developers very likely arrive with that knowledge, and if you know one Spring application you know how to move around almost any Spring application.


It is the frameworks that give more rigidity and boilerplate, not so much those languages themselves (although Java does have more boilerplate than Ruby). You have a lot of freedom when you program in plain Java and Ruby vs. using frameworks in those languages. And that is at least partly by design. See the Template Method design pattern, which is the basis of frameworks that use Inversion of Control (IoC).


Clojure frameworks are written by Clojure developers which means they don't give you structure, only tools to realize your freedom.

Frameworks that give a lot of structure are beneficial to novice developers because they need that structure. In the absence of supervision, the various articles on the Internet and Stack Exchange answers on how to write a controller or some other piece of a Java app give a novice developer the necessary guidance on how to structure their code.

On the other hand, as a more mature, knowledgeable and experienced developer, you may see that this structure is not perfect. It is repetitive, it has a lot of boilerplate, and it does not suit every application well.

You may even figure out that every application has its own perfect framework, depending on its size, its other characteristics and the problem it is trying to solve.

Clojure lets you design that framework and then write the application in the perfect framework for it.

That framework might be some zero-code decisions (like decisions on where to put which part of the code), but it can also be a set of macros, up to a full-blown DSL.

Assuming you are a mature developer, you will know how to use these to solve your problem, but if you don't, then that's where the problems start.


Yes, frameworks are kind of like a smarter programmer starting you off with a code base already, so if you don't really know what you're doing you can't mess it up as much. I've still seen people mess it up, and generally it's when the developers begin to "play" with things they don't understand, like bringing in Aspect J, writing custom annotations, slowly moving logic to configuration files, starting to dynamically load modules in/out.

It's a bit of a struggle, because an experienced dev will hate the framework, as they know better; but then, as less experienced devs work on the code base, it'll degenerate in ways a framework-based version would have withstood for longer, thanks to the availability of documentation and Stack Overflow Q&A.


There are two more points.

If you are a tech lead or manager, these frameworks are really great value because they "give" (or rather let the developer find) guidance on a lot of aspects of the application.

If you created your own perfect framework, you would be responsible for providing a huge amount of that kind of guidance, but if you are using Spring, every one of your developers can just google the answer to most (or even almost all) of the problems.

And the second point is, again, if you are a tech lead or manager: these frameworks are a great help in preventing your "highly intelligent" developers from making the mistake of developing their own framework. I put the quotes in intentionally, because some developers just think of themselves as very good developers but dismiss the risks and costs of developing software that is not a core part of the project (aka NIH).

With Spring regulating most aspects of the application any damage from those kinds of actions is also limited.


> mess it up [...] like bringing in Aspect J

And, see? That's from a Lisp guy. QED.


> What makes Perl or Clojure such fun languages to work with

... while you're writing a small program by yourself ...

> is they don't impose structure on you. You can do whatever the hell you want.

You mean, there is nothing _preventing you_ from doing whatever the hell you want. That's not the same thing.


It feels like Lisp scales vertically and Java scales horizontally.

Java scales with the number of developers; its idioms and ecosystem make it easier to have tons of developers on a single project.

Lisp scales with the size of the problem that a small, close-knit team can solve.

Things like the macro system in Lisp encourage defining your own problem specific DSL, which is great for individual developer productivity, but bad for involving lots of developers.


Like the way those defclass and defmethod macros totally destroy programming at large.


It may take a while to see the visual clues: indentation, formatting, structure patterns. Beyond simple lists, Lisp uses a variety of list structure patterns for language constructs. It's a bit like learning to ride a bike: initially it looks impossible to balance, steer and move forward at the same time.

One thing that's not that usual is that authors can implement new language constructs themselves. Thus one may need to understand that meta-syntax level when reading and writing new constructs. A starter book like SICP thus does not use this prominent feature: it mostly does NOT define & use macros.

Authors need to learn how to design new macro operators.


I think it is important to distinguish necessary from accidental complexity.

Macros are typically more difficult to understand, but if done well, that is because they are sinking complexity from a bunch of code.

For example, if a macro implements the variations of a repeating construct, you are removing those variations from your entire codebase and putting the complexity of dealing with them into a single macro.
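A minimal Common Lisp sketch of that idea (the WITH-TIMING macro and its call sites are made up for illustration): the time-and-log boilerplate that would otherwise be repeated everywhere is sunk into one place.

  ;; WITH-TIMING and its call sites are hypothetical, just for illustration.
  (defmacro with-timing ((label) &body body)
    "Run BODY, returning its value, and log LABEL with the elapsed time."
    (let ((start (gensym "START")))
      `(let ((,start (get-internal-real-time)))
         (prog1 (progn ,@body)
           (format t "~&~a took ~,3f s~%" ,label
                   (/ (- (get-internal-real-time) ,start)
                      internal-time-units-per-second))))))

  ;; Call sites stay flat and identical in shape, e.g.
  ;; (with-timing ("load users") (load-users db))

The variations (different labels, different bodies) live at the call sites; the fiddly part lives once, inside the macro.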

Now, the issue is when you start creating accidental complexity.

For example, there are various techniques that reduce overall complexity just by being consistent in how you do things. Using only hygienic macros, or writing macros consistently in a way that allows the reader to predict what they do, removes a lot of perceived complexity.

If the reader of the code does not have to understand a macro to be able to more or less predict what it does and what basic guarantees it provides, that removes a huge amount of complexity when you try to read and understand the code.

If writing a macro can be compared to writing an operator in a language, then all the other language design rules apply. It would be a bad language that constantly surprises the user.

Unfortunately, Lisp code tends to be much more abstract, with lots of complex, custom operators (macros) as building blocks. If these building blocks are not clear and cannot be relied upon (i.e. you don't understand how they work and can't predict what they will do), then this is much more damaging when trying to understand how the codebase works than unclear functions are.


> sinking complexity from a bunch of code

Sometimes macros provide domain-level constructs. This creates very dense code - which can improve code understanding.

One thing I would recommend is to put extra effort into macros, especially in these aspects:

One should write documentation for what the macro expects and what it does. This way it does not have to be inferred from reading the often quite complex code. Document the implemented syntax.

The macro should check the syntax of the forms it receives, and provide error messages for rejected forms.
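A small Common Lisp sketch of both points; DEFINE-ROUTE and REGISTER-HANDLER are hypothetical names, not from any real library:

  ;; REGISTER-HANDLER is assumed to exist elsewhere; this sketch is only
  ;; about documenting and checking the macro's own syntax.
  (defmacro define-route ((method path) &body body)
    "Define a handler.  Syntax: (DEFINE-ROUTE (METHOD PATH) BODY...),
  where METHOD is :GET or :POST and PATH is a string literal."
    (unless (member method '(:get :post))
      (error "DEFINE-ROUTE: METHOD must be :GET or :POST, got ~s" method))
    (unless (stringp path)
      (error "DEFINE-ROUTE: PATH must be a string literal, got ~s" path))
    `(register-handler ,method ,path (lambda () ,@body)))

The errors are signaled at macro-expansion time, so a malformed form is rejected when the file is compiled or loaded, with a message that names the macro, rather than failing somewhere inside the expansion.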


With those things in place, macros can absolutely be a net positive for code understanding. And if I saw that being done with any regularity at all, that would be great.

As I get older and more jaded, though, I am coming to think that, in an office setting, it is a mistake to choose a language that is optimized for maximizing the effectiveness of a thoughtful programmer. It's much more valuable to minimize the damage that can be done by a careless programmer.

One of the most sobering realizations of my career was that the most effective way to become your team's 10X programmer is not to work in a way that lets you be 10X more efficient than your colleagues. It's to work in a way that forces your colleagues to be 10X less efficient than you.


That is exactly what I have realized after decades of work.

There is a cap on how much work you can do individually. There is no cap on how much damage or improvement you can cause to other people you work with.

It does not matter how clever you are with your code if nobody else is going to be able to continue that work or support its results.

For these reasons, I have developed the following informal rules that I try to follow:

1. It is ok to be clever with the code if I am going to be the only one to ever read or use it. This can be assumed to always be false for production code. I limit my cleverness to PoCs, ad-hoc tools and emergencies.

2. Always think about how other, and especially junior, members of the team are going to develop and support my product. Is it too complex for them to follow? Is there something that can be done so there is less chance they will misuse it?

3. I use my cleverness to try to develop simpler and more reliable products. Simplicity is defined by how much work a junior member of the team has to expend to understand the product. Reliability is defined by how likely it is to fail in the face of junior team members operating or developing the piece of code.

4. If team members have trouble understanding or working with my products, it is always my fault. I figure out how to make the product simpler, or at the very least provide training and ensure they understand how it works.


> you can't quickly tell whether some thing you're looking at is data, a function, or a macro, or what

First of all, there are elements that are unmistakably data: "abc", 123.0E-13, #(1 2 3), :keyword :symbol.

The ambiguity is that a compound expression headed by a symbol could be anything. In order not to be fooled, we just have to read the entire top-level form from the outside in.

Well, that's what language is. For instance, if I'm saying "Bob believes that the Earth is flat", but you only catch my speech starting at the word "the", it sounds like I'm saying that the Earth is flat. To get the real meaning (the belief is attributed to Bob), you need the whole sentence.

Or, until you hear the "nai" at the end of a long Japanese sentence, you have no idea that it's going to be negated.


> In a Lisp, though, everything looks more-or-less homogeneous. ... you can't quickly tell whether some thing you're looking at is data, a function, or a macro, or what. It all blends together into a disorienting fog.

I wonder if this is a feature of Lisp, instead of a defect.

When you call up a variable or a function or macro, the intent is to return some kind of result. If variables, functions, and macros all look alike, then maybe they can be woven more seamlessly into the code to create its own style.

You can also think of a variable as a function call with zero parameters.


You can write Perl in any language. Lack of expressive power in the language can also lead to a disaster. Sure, Java maybe doesn't have a fancy type system and macros, but then the same kind of clever developers use runtime reflection and bytecode manipulation, which tend to end up as just as big disasters, if not bigger.


Exactly my experience. Most Java codebases end up using all kinds of clever tricks like you said: runtime reflection, bytecode manipulation, source code pre-processing/generation, pushing all logic to configuration, abuse of compiler annotations, etc.

These actually create a bigger disaster, because unlike Lisp macros or higher-order functions, they have even less structure and are completely non-standard. I'd rather people use a well-thought-out mechanism for extension and "cleverness" than some poor ad-hoc variant that does the same.


The question isn't whether it is possible to write shitty code in a given language. Every language can be used to write shitty code.

The issue is: given a person who knows the language but does not know how to structure their code, what is the likely outcome with regards to maintainability?

My production experience is mostly with C, C++, Python and Java.

As an example, consider a typical Java backend application. Even novice developers would typically learn Spring and create applications that contain controllers, views, repositories, services, etc.

They (I mean novice developers) will constantly repeat rules without knowing why they follow them, only that they heard they are important or have seen that they are popular. So maybe things like "your objects should accept injected dependencies".

It does not mean they will know how to use these, or even that they will use these productively to create more readable code, but it does produce code that has at least some structure.

The code might have a huge amount of duplication and redundant or unnecessary constructs, but you will be able to move around and understand what you are looking at (okay, this is a controller so I know how it most likely works; this is the database layer so I know what I can expect; etc.).

On the other hand Clojure imposes exactly ZERO structure on your application. That is powerful but only if you know how to structure the application yourself, know what kind of choices you need to make and know ins and outs of various options.

If you have no experience creating structure, have been a Java dev all your life, and have always relied on structure that was given to you and never thought about it until today, you are likely to produce an absolutely unmaintainable mess that nobody is going to be able to figure out.


> The issue is: given a person who knows the language but does not know how to structure their code, what is the likely outcome with regards to maintainability?

> My production experience is mostly with C, C++, Python and Java

> On the other hand Clojure imposes exactly ZERO structure on your application. That is powerful but only if you know how to structure the application yourself, know what kind of choices you need to make and know ins and outs of various options

All I can say is that I have professional experience with Java, C#, C++ and JavaScript, and also with Clojure.

From my experience, I've noticed that I can actually navigate even a messy Clojure codebase better, because the language is quite simple. Even if someone plasters poorly thought out macros everywhere, the rules of macros and macro-expansion are very simple and clear. I can easily figure out what they do and then start to make sense of the mess. And the REPL allows me to very quickly explore everything that is structured confusingly and is hard to read.
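For what it's worth, that unraveling step is mechanical in any Lisp: you ask the system to expand the form. A Common Lisp sketch (Clojure's macroexpand-1 behaves analogously; WITH-AUDIT-LOG stands in for whatever unfamiliar macro you run into):

  ;; WITH-AUDIT-LOG is a hypothetical stand-in for an unfamiliar macro.
  (macroexpand-1 '(with-audit-log (:user id) (save-record rec)))
  ;; => the code the macro generates, returned as plain data that you
  ;;    can read, pretty-print, and expand again if needed.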

This is not true with messy C++, C# or Java, in my experience. When Java gets messy, you are now dealing with pre-processors, custom annotations, XML/reflection, magic strings, hidden circular dependencies, and a ton of coupling, and those can really get out of hand. There's no simple, systematic method for unraveling the mess like there is in Clojure.

And on the data side of things, this is also true. Clojure's data model being immutable also means it can be systematically unraveled. In Java, C# and C++, you have so many hidden data dependencies that can trip you up; realizing how far the shared data has spread across a messy codebase is quite difficult. In Clojure, the only challenge is figuring out the data's flow, not its sharing.

But, to caveat, all the Clojure services I've worked on were developed while I was lead. So it's possible that having me around prevented them from getting to a place where they cannot be salvaged. And similarly, most Java, C++ or C# projects I've led also never ended up in such rot for as long as I was around. Whereas the Java, C++ and C# projects I've found to be unsalvageable messes I generally inherited from others or from people before me. I haven't yet had to inherit a Clojure project I wasn't involved with from the start; I do wonder what would happen then. My best evidence here is open source: so far, most Clojure open source codebases I've explored I've found simple to unravel, but maybe open source has a higher quality bias.

Edit: One last thing that keeps me hopeful is Emacs, probably the oldest most distributed development project ever using a Lisp, and I find it pretty easy to understand its code base and add to it.


> You have to be really incompetent to write a typical Java service in a way where it is no longer possible to develop it at all

I've seen multiple Java services reach this state and require a complete rewrite as the only path forward. I think you've just had one kind of experience and are drawing conclusions from it, but in your case, I don't think the programming language is a factor. You just happened to see some failed Clojure projects, as I happen to have seen failed Java projects.


Yeah... or where it's possible to develop, but a feature that -should- be all of a week is a multi-month effort.

In fact that seems like the norm for any Java project that has been ongoing more than a year.


> You have to be really incompetent to write a typical Java service in a way where it is no longer possible to develop it at all,

Disagree. Anything can be made write-only.

> but in Clojure it is just enough to get couple of people that are intelligent enough to write macros but not experienced enough to understand the dangers of lack framework forcing the structure of your application.

Sentence doesn't parse on multiple levels.


Agree with him. Coming from an OO background I always cringed at the 15,000 line classes with 2000 line methods. Side effects everywhere.

In my naivety I thought functional programs with their emphasis on lack of side effects could help. I then encountered a project that was totally functional but written entirely by people with no functional experience. The entire application was unmaintainable even in the most basic parts.

> Doesn't parse

> Macros

Macros modify code structure at runtime so obviously that is fraught with danger.

> Framework forcing structure

Lisp languages in general have a do-it-yourself attitude not found in others. While there are frameworks, it's common to find projects that have invented entire web frameworks, DB access and ORMs from the ground up.

Imagine if every project you encountered in Java reinvented Rx Java, Hibernate, and Spring.


> Macros modify code structure at runtime so obviously that is fraught with danger.

You may have inadvertently misspoken, but macros operate at compile time, at least in the Lisps I have used.


If one uses a Lisp interpreter, macro expansion may happen at runtime.

CLISP example:

  [2]> (defmacro add2 (place) (print 'add2) `(incf ,place 2)) 
  ADD2
  [3]> (let ((a 1)) (dotimes (i 4) (add2 a)))

  ADD2 
  ADD2 
  ADD2 
  ADD2 
  NIL
  [4]> 
As one can see the macro form is expanded four times at runtime.


This is a confusing example, because in a REPL steps of compilation and evaluation are interleaved.

Indeed, can you write a program for CLISP that works like this:

- takes one command line argument (a file name)

- reads in the given file, interprets it as Common Lisp code, expecting it to deliver a definition for the add2 macro

- then runs

   (let ((a 1)) (dotimes (i 4) (add2 a)))


> This is a confusing example, because in a REPL steps of compilation and evaluation are interleaved.

The CLISP REPL does not compile, thus it can't be interleaved.

> then runs

It will still be interpreted and the macro will still be expanded at runtime.


I don't know Lisp well, but I remember PG saying in one of his books or essays, that Lisp is a language in which you can compile and run at read time, and the other two possibilities, too.


Macros are evaluated inside of 'defun; however, 'defun is evaluated at runtime (when the .lisp file is being loaded into the implementation), so base698 is technically correct.

Except that it is not really "fraught with danger" unless you are redefining macros that have already been expanded and cached in function definitions.


In full Lisp "compile time" is part of application execution.

Now, Clojure is kind of an impaired Lisp, because it is written for a VM that was not intended to be used this way, so this is not as pronounced (but you still get the REPL, etc.).


Is there anything lacking from the JVM beyond tail-call elimination that Lisp needs? I know Clojure is very slow to start up but my understanding is that this is because it uses the JVM very inefficiently, and they don't seem to care much.


I'm a big fan of Racket, and while I don't write very many macros, my understanding is that in both Common Lisp and Racket, macros are essentially a compiler pass, with no modification at runtime.

https://docs.racket-lang.org/guide/macros.html


Yes, same thing noticed here. Incoherent sentences above, in your parent's comment.


Have you looked at Nubank? The largest neobank in the world by users and valuation, it's a Clojure shop to such an extent that they literally bought Cognitect.

No affiliation, just a satisfied customer


There are examples of successful companies using any programming language. It doesn’t prove anything about the quality of the language. The world still successfully runs on COBOL. That is not a good enough reason to pick COBOL for your next project.


The difference here is that Nubank attributes part of their success to the use of Clojure.


I've read a few times that Java and enterprise languages are made so horribly tedious on purpose: to slow down armies of monkeys and avoid catastrophic design.

ps: what kind of clojure projects do you have in mind ?


That's a common meme but there's not much evidence for it.

Java has the syntax it does because it's based on C++. It is verbose because (a) C++ is verbose and (b) it insists on everything being inside a class. The latter is a reasonable choice given that the underlying VM needs some unit of linkage and scope ... sort of like criticising C and C++ for requiring that everything be inside a function, but there are reasons for doing it that way.

It's true that Java's designers are very conservative, and this is partly because there are many users who appreciate that their skills don't get obsoleted all that fast (a lot of whom are in enterprise roles). But that's more of an accident of a commitment to backwards compatibility and historically long release cycles. It's not like they set out to make an 'enterprise language', they didn't. It was originally meant for the embedded TV set top box world.


I really did read that big companies, for logistical reasons, prefer verbose, boilerplate-heavy languages because it slows down mistakes. Not only because the slow pace of evolution in the language is better for long-term careers.


I don't think that's really true. I think it's more the case that the sort of languages that have the features large teams need have tended historically to be verbose, and some people conflated the two together. For instance lambdas made Java less verbose and there was no pushback from enterprises, which you'd have expected if they actually loved verbosity. Kotlin is also doing great in big companies and conciseness is one of its selling points.


> This underlines the fact that Lisp projects can very easily end in spectacular disasters.

I've been following HN since 2014 and haven't seen any submission about such a thing.

The closest was some article or comment from someone who worked at Cycorp, about some horrors of the legacy code base.

Here it is: https://news.ycombinator.com/item?id=21783828

"... the biggest mess I have ever seen by an order of magnitude"

:)


Not surprising since LISP isn’t used much.


Still, for some example companies: https://github.com/azzamsa/awesome-lisp-companies


How do you find REPL integration in Common Lisp compares with Clojure? I find myself blissfully productive in Clojure, and I wonder if it's the same for all Lisps


Common Lisp invented the REPL, Clojure only provides a subset of it.

Have a look at, "The Interlisp Programming Environment",

https://www.computer.org/csdl/magazine/co/1981/04/01667317/1...

Or online demos like https://youtu.be/OBfB2MJw3qg for Symbolics.

Ultimately you can try the community versions of Allegro and LispWorks.

https://franz.com/enterprise_development_tools.lhtml

http://www.lispworks.com/


Yeah the REPL in your typical Common Lisp is much more integrated than nrepl/cider in Clojure. Things like restarts and the debugger are built with the REPL in mind.


> This underlines the fact that Lisp projects can very easily end in spectacular disasters.

This is not specific to Lisp, it happens to all dynamically typed languages.

And this is why "holding invariants in my head" doesn't scale. Your objects have a type. Why not put it in the code and ask the compiler to verify it, so that future developers on that code (including yourself) will have an easier time reading it and evolving it?


Common Lisp allows you to annotate code with type information, which can be checked at runtime. Also, its object system works on classes, and thus generic functions can specialize their arguments on those classes.

So Common Lisp is a dynamically typed language, but one where implementations can use knowledge about types, either at runtime or sometimes even at compile time.

SBCL

  * (defvar *foo*)       ; a global variable *FOO*
  *FOO*
We tell SBCL that the type of the variable is an integer between 0 and 10.

  * (declaim (type (integer 0 10) *foo*))
  (*FOO*)
Now we try to set it to 30:

  * (setf *foo* 30)
SBCL detects the problem:

  ; in: SETF *FOO*
  ;     (SETF *FOO* 30)
  ; --> SETQ
  ; ==>
  ;   (THE (MOD 11) 30)
  ;
  ; caught WARNING:
  ;   Constant 30 conflicts with its asserted type (MOD 11).
  ;   See also:
  ;     The SBCL Manual, Node "Handling of Types"


That's amazing. I want that.

I like how clear it is about what it's for, which is the sort of thing one does all the time in other languages that lack a vocabulary for it. I'm going to assume you could put any predicate you wanted in there?
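(From a skim of the standard, the answer seems to be yes: a Common Lisp type specifier can wrap an arbitrary named predicate via SATISFIES. A hedged sketch, with SMALL-EVEN-P and *BAR* made up for the example:)

  (defun small-even-p (x)
    (and (integerp x) (evenp x) (<= 0 x 10)))

  (defvar *bar* 2)
  (declaim (type (satisfies small-even-p) *bar*))
  ;; (setf *bar* 3) now conflicts with the declared type and gets flagged.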


An excellent analogy would be the metric system vs imperial measurement system, at least in the USA.

A productive and profitable country of feet and pounds and inches, but significant progress is usually made in labs full of meters and liters.

One could say the constant translation problems between the systems are something like an intelligence filter which provides its own rewards when not actively translating.


> I have trouble finding enough people mature enough to even propose working projects in Clojure.

It has nothing to do with maturity and everything to do with the maintenance time and maintenance costs of using languages that do not have a robust pipeline of talent.

Using languages like Clojure does give you a great amount of job security if you manage to sneak it in, though.


> Using languages like Clojure does give you a great amount of job security if you manage to sneak it in, though.

I am not after job security. In fact, I treat "I am not after job security" as one of the important points when presenting to potential employers.

I try to do a good job and think of the outcomes for the employer first.

I value trust, and I try to make it clear to my boss that they can trust me to always work in the best interest of the project and company.

For example, I always make it clear when I make mistakes and then we try to figure out how to solve them.

I think, overall, this brings much better job security than trying to sneak in a technology to create a project that only I would be able to maintain.


>I have never had a static type checker (regardless of how sophisticated it is) help me prevent anything more than an obvious error (which should be caught in testing anyway).

Obvious in retrospect is not the same as obvious. And such errors happen all the time, the same way typos happen all the time without syntax checks.

And "caught in testing" is 2 extra steps removed from caught immediately by the syntax checker running as you type or save: writing the tests and running the tests (and being thorough/lucky enough that this part of the code is covered in a test).

>But it is a stupid tool. It can only do so much. So, you now end up with artificial rules about how to satisfy this tool. And things that I know (and can justify or even formally prove for my use cases) are perfectly fine to do are suddenly not.

What thing that violates a type check would be "perfectly fine to do"?

Dynamic languages still have types (since you still need to pass around the right object with the right methods or fields to the call site, else it will raise an exception when you use a method/access a field/run a function on the wrong type).

  def square(value):
     return value*value
If value is not a numeric type, square is not going to be happy. And that's the case with most (all?) dynamic code.

What would be a counter-example where something not satisfying a type checker would be "perfectly fine to do" in a dynamic language?

(I do accept that a statically typed language is better at having generic and/or algebraic types, to make lots of invariants easier to express without ceremony.)


> What thing that violates a type check would be "perfectly fine to do"?

One good example is where you might treat records or "product types" as maps

Let's say you want to write a function that can capitalize all the string fields in the object passed in. In a dynamic language, you could map over the values and apply capitalization trivially. It would be a one-liner.
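For concreteness, one Lisp-flavored sketch of the dynamic version, treating the record as a plist (the :name and :born fields are made up):

  (defun capitalize-string-fields (record)
    ;; Walk the plist two elements at a time; capitalize string values,
    ;; pass everything else through unchanged.
    (loop for (key value) on record by #'cddr
          append (list key (if (stringp value)
                               (string-capitalize value)
                               value))))

  ;; (capitalize-string-fields '(:name "ada lovelace" :born 1815))
  ;; => (:NAME "Ada Lovelace" :BORN 1815)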

In a static language, you'd have a few options, but none are great:

1. You could use "reflection" or some dynamic feature to do this, but lose type-safety

2. You could make the function take in the union of all possible data types in your system, and then write a mapping function for each one. Then you're duplicating the logic N times, or at the least writing a lot of conversion boilerplate.

3. You could write two functions for each data type in your system to convert to a Map<String, Object> or some such and back, which lets you operate on the maps. Again, lots of boilerplate. Maybe you use code-generation for this? That would add complexity.

In reality, you probably wouldn't even try to write such a function in a typed language because of how awkward it is. Maybe you'd just write specific translation functions for the records you know you happen to need. But I think that's what people mean when they say you're restricted by the type-system - because it can't verify those types of operations, you find workarounds when maybe that would have been the easiest way to implement something.


There is no limit on the number of problems easier to solve using dynamically typed languages. For an individual hacker, dynamically typed languages make a lot of sense. I don't forget what types my functions accept as I am writing a program. All static typing can accomplish is slow me down when I'm trying to bang out something. This is why Python (for instance) is loved by data scientists, researchers, and startups. And they're right to love it! Purely anecdotal, but I can accomplish more faster (from a clean slate) with a dynamic language like Python than I can with Haskell, Go, or Rust. The difference is small, but non-negligible.

But static typing has a few key benefits:

* No searching through comments for type constraints.

* Better autocompletion.

* Better compilation.

* Better serialization/deserialization.

I do think static typing pays for itself in the long term. Sometimes, we shouldn't care about that. Engineering means managing tradeoffs: short-term velocity vs long-term performance and maintainability?


The auto-completion is purely Python's dynamic-dispatch problem though. Lisp has symbol-based auto-completion which doesn't suffer from the same issue (search is a different story!).

This sounds terribly controversial, but I would love to know /when/ the static-typing pays off compared to a looser approach like sound-gradual typing (where you basically export types at a boundary and make sure that you don't violate those).

I do wonder whether a better approach is to prove your code in some other language (say Z3, Agda, Coq) and then implement it in something else, making sure you can prove isomorphism. This has quite a few benefits: you're not tied to a constructive proof at the type level à la Haskell, Rust, Go etc., and you can automate a large part of trivial proofs.


The fascinating thing about Python is the ability to write functions where a parameter can take in multiple data types.

I haven’t quite seen another language handle it as elegantly as Python does.

Then, inside the function, you can check the parameter’s data type, and handle it appropriately. And error out immediately if the wrong type is passed in.

This allows you to operate on the data at a more logical level (what it implies), as opposed to just at the technical level (what data type it is).

For example, you can pass in a single string, or a list of strings. And your function can check the type and handle it appropriately.

In C++ and Java, because of the rigidity of the function declaration, in order to do something like this you would need to create multiple functions to handle each of the different types that you want to allow. Then you’d need to create a wrapper function to dispatch to the appropriate sub-function. That becomes a complete mess after a while.


C# has an is operator as well. I found it useful with generics.


> I would love to know /when/ the static-typing pays off compared to a looser approach like sound-gradual typing (where you basically export types at a boundary and make sure that you don't violate those).

When you are in a codebase where you don't know which modules have been soundly typed, and which ones are still in the 'gradual' phase.


> There is no limit on the number of problems easier to solve using dynamically typed languages

40+ years of programming experience here + a language geek. I have for years programmed in both dynamic and static languages to find out what works best for me. My conclusion is that problems are easier to solve with statically typed languages. So clearly different languages work for different personalities.


Clojure gives you tools like spec https://clojure.org/guides/spec or malli https://github.com/metosin/malli for schemas on data. Beats static typing IMHO.


>There is no limit on the number of problems easier to solve using dynamically typed languages.

So, some examples? Also examples where "easier" is not just "you don't need to write types". That goes without saying...


One more that people easily forget: an IDE or LSP can detect many more errors as you type, saving quite a lot of mental context switching.


how about using something like clojure.spec where you need it most to help make that long term payment?


It's an interesting take. e.g. provide specs/types across module boundaries only. Gradual typing is certainly becoming popular. Meanwhile, statically typed languages are adopting type inference. Some people only write the types for the exports in their modules. The two worlds are coming closer. More statically typed languages are adding dynamic type information (like Go). More dynamically typed languages are supporting static type markup (like Python).


Coming back to even your own ~+1KLOC project after 3 months of working on other things, compiler enforced types can help you jump right in


I reject the claim that it is reasonable to capitalize all the string fields of arbitrary objects.

But it can still be done, as you mention, by using reflection. Now, you claim that it is unsafe. This is untrue; it depends on how the reflection works. Even if it were unsafe, it would be no worse than in Python, where you get a TypeError, or in JavaScript, where you get undefined, which probably leads to a crash later.

For an example of reflection which is safer than just a string-dictionary like JavaScript or Python, see GHC.Generics. It works by compile-time generating a type safe structure which is not just a dictionary. The function you want would have a signature involving this constraint, "Generic x => x -> x", which means it works for any "object" implementing the Generic interface.

Now you may say "but I asked for a function that handles all objects", but I don't see the point. In statically typed languages you get an additional option of using generics when you want. It doesn't make sense to me to require all types to implement reflection when you don't actually need it everywhere.


>1. You could use "reflection" or some dynamic feature to do this, but lose type-safety

By using reflection you only lose type-safety for this particular function. With a dynamic language, you lose it everywhere.

And it's still a one-liner, or close to it, in a language without much ceremony (e.g. not Java 1.5).

Example implementation (pseudo-code):

  fun capitalize (o: object) -> [f.toUpper() for f in 
  reflection.fields(o) if reflection.type(f) == "string"]


That's not a very compelling example - in most statically typed languages the object field names are not actually stored in memory, so it makes no sense to want to do runtime operations on them, because they don't exist by the time the code is compiled.

That's why you find it hard to do in a compiled statically typed language - because it's not something that really makes any sense.


> Let's say you want to write a function that can capitalize all the string fields in the object passed in. In a dynamic language, you could map over the values and apply capitalization trivially. It would be a one-liner.

That's fair, but can you explain the context here? I.e. why are we trying to capitalize all the field names of a record object? This may be an 'XY' problem--you may be trying to accomplish some final goal and 'capitalize all the field names of a map' seems like an obvious intermediate step to you, while a statically typed language programmer may take a very different approach.


It's pretty easy to do in Typescript and loses no type safety (without needing to assert or cast anything)

    type T = { [k: string]: number };
    
    const t: T = {
      one: 1,
      two: 2,
      three: 3,
    };
    
    const makeUpperCaseKeys = (v: T): T => {
      const keys = Object.keys(v);
      return keys.reduce((p, c) => {
        const key = c.charAt(0).toUpperCase() + c.slice(1);
        return { ...p, [key]: v[c] };
      }, {});
    };
    
    console.log(makeUpperCaseKeys(t));
    // {
    //   One: 1,
    //   Two: 2,
    //   Three: 3
    // }


That's great! Are you making the argument that I won't be able to find an example where doing some transform in a type-safe way is awkward in TypeScript, or just that this one example can be done in TypeScript?

By the way, my example was meant to be more like User -> User where there are some string values and some other types of value. You want to do the transformation in a generic way, but still have the type-safe object come out the other end with all the expected keys.


You can try this with TypeScript 4.1:

    interface User { name: string; age: number; }

    type CapitalizeProperties<T> = {
      [Property in keyof T as Capitalize<string & Property>]: T[Property];
    };

    type CUser = CapitalizeProperties<User>;

    var c: CUser = { Age: 17, Name: "john" };

You can check that the autocompletion works on the c variable.

https://www.typescriptlang.org/play?ssl=1&ssc=1&pln=15&pc=2#...


> In reality, you probably wouldn't even try to write such a function in a typed language because of how awkward it is.

In reality, you probably wouldn't because of how contrived and completely removed from any conceivably realistic practical goal this example is. In what situation would I actually need to solve this problem for every type in my program?


As a proponent of static typing (at least for larger programs): thank you, that was a very enlightening example of when dynamic typing can be superior to static typing.


> If value is not a numeric type, square is not going to be happy.

That is not necessarily true. In Common Lisp, with its generic-function-based object system, you can extend the operation of any function at any time to cover new types. So, for example, one could define a method for SQUARE that operates on matrices.
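A hedged sketch of what that looks like (the MATRIX class and MATRIX-MULTIPLY are assumed to be defined elsewhere):

  ;; MATRIX and MATRIX-MULTIPLY are hypothetical, defined elsewhere.
  (defgeneric square (x)
    (:documentation "Multiply X by itself, for whatever type X is."))

  (defmethod square ((x number))
    (* x x))

  ;; Added later, without touching the existing method:
  (defmethod square ((x matrix))
    (matrix-multiply x x))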

> What thing that violates a type check would be "perfectly fine to do"?

(define (self-apply fn) (fn fn))


Most implementations of Common Lisp prevent shadowing builtin symbols. You get compile/runtime error like "lock on COMMON-LISP package violated". So, unless author went to extra lengths to work around it, you can rely on the standard stuff like * when you read the code.

Even for user-defined functions, it's not a usual practice to extend the operation of any function (although again, it's kind of possible). There are CLOS generic functions for that.


> Most implementations of Common Lisp prevent shadowing builtin symbols.

So? Make your own package.


What's your point? In CL it is trivial to determine, both at compile time and at runtime, the package of all symbols involved. Same for generic methods: you can query the runtime for which methods it is going to use for some concrete combination of parameters. Much easier than determining which overload of a C++ operator the compiler/linker deems to fit in a given context.


My point is that this:

> Most implementations of Common Lisp prevent shadowing builtin symbols.

is irrelevant. It's actually irrelevant for two reasons. First, the example under discussion was SQUARE, which is not a CL builtin, and second, even if it were, you could just make your own package and do:

(in-package :my-package)

(defmethod square ((n number)) (* n n))

Now you can extend SQUARE to work on things other than numbers. Or, to pick a more realistic example:

(defmethod sqrt ((n number)) (cl:sqrt n))


Oh that's the misunderstanding, I was talking about overloading of the '*' operator and the consequent confusion, not overloading of 'square'.


* is not an operator in Lisp. It's a function. Lisp doesn't have operators.

To be more precise, cl:* is a function. But you can make a symbol named "*" in another package and define it however you like.


> (define (self-apply fn) (fn fn))

Out of curiousity, what would be a practical application of this?



That didn't answer my question. Let me rephrase it: When would you use

    (define (self-apply fn) (fn fn))
in an actual, real-life situation (outside of teaching lambda calculus)?


Personally, I consider teaching the lambda calculus to be a real-life situation. Teaching the lambda calculus is no less real to me than any other part of my life. But OK, self-apply is not something you're likely to see in production code, and neither are Church numerals, which is where this problem shows up for real. But MAP, REDUCE, and APPLY are, and they all have the same problem as self-apply: their static types are infinite.

This is a general problem with all static type systems. There are only two possibilities: either your type system is Turing complete or it is not. If it is not, then there are things that one might reasonably want to express that cannot be expressed, and if it is, then it is undecidable. Static typing is either constraining, or it is a Turing Tarpit (https://en.wikipedia.org/wiki/Turing_tarpit).


> But MAP, REDUCE, and APPLY are, and they all have the same problem as self-apply: their static types are infinite.

Huh? Any semi-decent static type system has no problems modeling higher-order functions, and static types can easily be infinite via parametric polymorphism ("generics").

> Static typing is either constraining, or it is a Turing Tarpit

You're letting the perfect be the enemy of the good. Static type systems can be beneficial even when they don't solve every conceivable problem.


> Static type systems can be beneficial

I don't deny that. The problem is not with the idea, it's with the (usual) implementation, which is that if you don't appease the type-checker it won't let you run your program. I'm perfectly fine with a type checker that only gives me warnings but still allows me to run the code even if it isn't satisfied.


Without additional arguments? Never. With additional arguments, it provides the function with access to itself.

This could be used "under the hood" in an implementation to bootstrap the ability of named local functions to have themselves in scope, which is useful for writing ordinary recursion.

Compilers and interpreters can achieve such a thing without needing a jig like a Y combinator, because interpreters and compilers work with environments that are reified as objects. They are free to manipulate environments and evaluate/compile any piece of code in any environment they see fit.
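A small Common Lisp sketch of the pattern (just an illustration, not production code): the inner function reaches itself only through the SELF argument, with SELF-APPLY tying the knot.

  (defun self-apply (fn) (funcall fn fn))

  ;; Factorial where recursion happens by re-applying SELF to SELF:
  (defun fact (n)
    (funcall
     (self-apply
      (lambda (self)
        (lambda (k)
          (if (zerop k)
              1
              (* k (funcall (funcall self self) (1- k)))))))
     n))

  ;; (fact 5) => 120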


> What thing that violates a type check would be "perfectly fine to do"?

Here is an example limitation of typescript's type system that I routinely run into while developing real code [1].

I look at it like this. If you consider all possible programs, some are invalid, some are valid, and some are valid and useful. A type system's job is to reject as many invalid programs as possible while accepting as many valid programs as possible and trying to optimize for useful valid programs. Due to the halting problem this is impossible to do perfectly, so any given type system will likely accept some invalid programs and reject some useful valid programs. If the type system happens to reject your useful valid program, you'll likely have a bad day :)

[1] https://www.typescriptlang.org/play?#code/PTAEHUFNQCwQwG7TgO...


Sometimes the type system is forcing you to think in terms of what it is that you are passing around. In this case, getEmails() isn't expecting a group of students or a group of faculty, but rather a group of people that can be emailed. You can introduce an interface of that type and have it inherited by both Student and Faculty and use that in the method for clarity and type-safety without over-relying on union types: https://www.typescriptlang.org/play?ssl=1&ssc=1&pln=38&pc=30...


I appreciate the effort you’ve both put in to concrete examples. I think yours gets to a point that I haven’t often seen stated. You’ve named your interface Person, but perhaps even Emailable could serve the purpose. The problem is that you had to name it. Naming well is hard, and I believe strong type systems often create a need for more names. It’s a cost I don’t often see considered in the tradeoff.


Instead of giving the interface an explicit name, I think you could instead use something like this:

    function getEmails(group: Array<{email: string}>) {
        return group.map((p) => p.email)
    }
which leaves the function more open ended than relying on an explicitly named interface that other types then have to inherit from.


> And such errors happen all the time, the same way typos happen all the time without syntax checks.

No they don't, because syntax is richly varied and nested, even in programs whose "type story" is bland.

For instance, in some numerical program, there are lots of opportunities to make typos in the syntax, but the type of just about everything may be either float or else array of float (possibly string, if it has any error-handling code with messages, and bool if there are some logical operations).

If you pass an array of float where a scalar float is expected, such that an error occurs, and if you don't catch this in your testing, it means you're not testing the code.

Untested code is of a dubious status, even if it compiles with a static type system; testing is not negotiable.

Note that functions like sin and cos have exactly the same type signature, yet it is disastrous if you mix them up. Static type checking doesn't help. Static type checking also doesn't help with mixed up variables: calling f(x, y) which should have been f(y, x) or something else, where x and y have the same type.

In programs, chunks of code that are put together into the same module or function often work with multiple instances of the same type. It's more important not to mix up those instances, than to worry about type errors. The code could be wrong in all sorts of ways, yet statically check.

That's where you need to step up the argument into "True Scotsman's type system territory": a sufficiently advanced type system can encode all those properties that prevent the mixups that the everyday type system doesn't. Yeah, well, nobody uses that; nobody understands it outside of a narrow slice of academia. Examples of the technique are such that encoding even a trivial property like "the list is in sorted order" results in a considerable increase of program complexity, all concentrated in one place, and less easy to understand than a set of test cases against the obvious program. Yet it doesn't eliminate the need for testing; nothing does. There can now be a bug in the way the desired property was encoded into the program. Perhaps the sorted property was correctly encoded, but it should have been descending order. The test case will catch it.

When you have a language that is available at compile time, you can execute test cases as part of compilation. Test cases are therefore static checks. Anything happening at build time is a static check. Heck, how a git commit message is formatted is a static check, if a repository commit hook validates it.


> Note that functions like sin and cos have exactly the same type signature, yet it is disastrous if you mix them up. Static type checking doesn't help.

I agree with you. I am hard-pressed to imagine types (whether it's static analysis or anything at all) offering any help in catching inadvertent swaps of sin and cos.

One thing that types could help with is avoiding confusion between e.g. radians and degrees. Even this I believe is quite a challenge, because the quintessential (in my opinion) example of static analysis and types as they arise in science and engineering, the SI units and dimensional analysis, take radians to be dimensionless and therefore do not distinguish the input and the output types of e.g. sin (Radians).

I personally believe this problem is rectifiable beyond the SI type system (or who knows, within it), but not at no cost. I believe the choice to not distinguish the input and output types of e.g. sin (Radians) is rooted in many different practical considerations, and points to the difficulty of formalizing types in practice.

> Test cases are therefore static checks.

I don't agree with this unless you are talking about tests that perform some kind of symbolic execution / static analysis within them. What static analysis grants you is checking a "for all x" claim, without literally checking each x. You can argue that for any given static analysis system, the kinds of claims it lets you check are not that useful in practice, and I would agree in many situations. But having a finite number of test cases that tests some x is not the same as testing for all x.


Unit tests, maybe not, but it's commonly accepted even in statically typed communities that property-based tests are practically as good at guaranteeing properties which a stronger type system would catch.

For example, in Haskell almost all of the abstractions (Monoid, Semigroup) use property-based testing rather than proofs.


An example:

> y = [1] ++ [2] ;; Assume types List[Int], List[Int]

> y[0] ;; This should return Union[Int, None]

> y[0] + 3 ;; This should fail type-checking

You /know/ the list isn't empty. Sure you can argue that /maybe/ we should be using NonEmptyList, but now imagine this code.

> y = [1, 2] ;; NonEmptyList[Int]

> y = (filter (lambda x: x % 2 == 1) y) ;; Has to return List[Int]

> y[0] ;; Union[Int, None]

Again, we hit into type issues.


The type system is a proof system. In your examples, the types do not imply the invariants that you want.

Does this mean that the type system is faulty? No! It just means that you need to e.g. get a more expressive type system, document code properly, use smart constructors. There are many tools.

Nobody said you can trivially prove everything using type systems. So what are your examples showing?

I see your code, but I don't know what you're actually trying to do. One interpretation could be, that you want to prove that for the naturals (excluding 0), up to and including 2, there is an odd number. Can one prove this using type systems? Definitely. If you read Software Foundations, you will know how.


Sure - but the point is that there are /diminishing returns/ on type-systems. The question is then, /where/ should one draw the line when trying to prove certain properties of code; which properties are worthwhile proving?

Are you /seriously/ suggesting that one should use Agda or Coq for most code?

EDIT. The original comment was suggesting that dynamic and static type-systems are a dichotomy. I was pointing out that it is a sliding scale from nominally-typed à la Go, all the way to dependently-typed languages; and that's not even including model-checkers!


It depends on how sophisticated your type system is. For a pure HM type system, anything requiring rank-n types won't type check, but are theoretically sound in a type system that supports them.


Squaring is usually value*value.

(a minor nit that shouldn't detract from your point - since in a dynamic language, you'd have potential type errors in addition to this sort of thing...)


>Squaring is usually valuevalue.*

This version is optimized for the value of 2! (thanks, checked).


> What thing that violates a type check would be "perfectly fine to do"?

A typical example, in dynamic programming languages, is to treat numbers as strings and vice-versa; evaluate lists in boolean context; and so on.

> If value is not a numeric type, square is not going to be happy. And that's the case with most (all?) dynamic code.

Some programming languages will be glad to accept a string and treat it as a number -- see Duck Typing (https://en.wikipedia.org/wiki/Duck_typing).


> A typical example, in dynamic programming languages, is to treat numbers as strings and vice-versa

That sounds great until it bites you in the ass really hard. In general, you want to catch unintended behavior as early as possible.


You want to, but they don’t. Those people just want to get stuff done, as opposed to a language like Haskell or stricter where you’ll never be able to get old code to run because newer compilers print errors that take a week to understand. There’s some old Yegge articles about this…

PHP converting between strings and numbers is usually a mistake, but the one data structure doing everything (array and dict) is nice enough.


I don't think that is necessarily a dynamic language thing: that seems like your regular ad hoc polymorphism. Pascal allows it, IIRC. So does C#. Being able to use a function on different kinds of arguments is orthogonal to the typing discussion.

Scheme, for example, is dynamic yet very monomorphic. You have length (working only on lists), string-length, vector-length, bytevector-length etc.


that sounds really awful, not "fine."


Can you give an example of where doing this is useful?


> What improves (and guarantees) software quality is rigorous testing. To deliver high quality software, there is no other solution.

This is 100% false. You can’t inject quality into a bad design via testing. You can’t guarantee quality or correctness through testing. Testing is the second worst place to find errors.

You get a MUCH higher roi by producing thoughtful written designs and getting them peer reviewed. And again with code reviews. (Peer review is best but self review can also be very effective.) I’m not saying don’t test — testing is valuable, especially because it forces you to design for testability, and a suite of automated tests is valuable when making changes later.


I wonder how much experience you have in the real world. All of us developers had this thought at some point.

I've seen terrible code with plenty of quick hacks on top of that, which was running very stable, because it was battle tested for years in production.

What do you think will happen when you refactor such code to a better design? All juniors would think this is the best way to get a stable codebase.

Reality is that you will introduce bugs by refactoring, not get rid of them. Writing code means writing bugs, no matter how pretty your code is.

This reminds me of some quote that "Junior developers write 1 bug per 10 lines of code, experienced coders know they write 1 bug per 10 lines of code"


You might be one of those experienced developers who has gone through their entire career without ever meeting a well designed, beautifully architectured application, and therefore concluded, sensibly, that all real-world applications look like shit if you look inside, and you seem to have come to believe that this is the only possible way.

I have a different experience. Code that is well designed makes mistakes look obvious and the logic feel natural. It's very hard to write code like that, but I've seen it and I've been able to write it myself, so when I see a bunch of complicated code that only tests can tell whether or not it works, I know that the problem is really that the best design simply has not been found yet. If I find code that looks like shit and I realize there's a better design for it that will make the code clear and natural, I won't think twice about refactoring it ruthlessly if such code has been a source of problems and time wasted (and fixing difficult-to-read code is hugely costly to morale as well, which, when taken into consideration, increases the cost of technical debt by a lot).


> a well designed, beautifully architectured application

I've seen everything during my career, bad code with lots of bugs, bad code running stable. Nice code full of bugs, and nice stable code.

The reason something was stable was never how the code looked, but how battle tested the product was. You will learn, don't worry.

Plus, at a certain point, it's about trade-offs and compromises. I would love to see highly optimized code that is easy to read.

> only tests can tell whether or not it works

Tests are nice, but the real test will be when users start breaking your app.

I'm curious, what kind of product are you writing?


I've written code in all sorts of scenarios: government, web application back and front ends, auth, simulations, testing software, statical analysis, long-maintained products, throwaway but complex scripts for one-use-only, and multiple popular libraries I maintain.

Difficult-to-read code always causes problems, either because it directly causes bugs due to its logic being obfuscated, or because no one can approach it to make necessary modifications when change is inevitably required.

You claim that "The reason something was stable was never how the code looked, but how battle tested the product was". I am claiming that you're wrong and your counter-argument is that I will learn to be like you one day. This shows that you're the one who has to grow up.

I am constantly looking for ways to write code that works in all scenarios, provably. I don't need to battle test code if I can do that. I only need battle tested code in the absence of that property. I will concede that on many occasions, this is all you can get, but that's due to a lack of time and skills, not an inherent property of a system or the process of writing software.

It's sad to see people who gave up on creating good software (because if you only can create reliable software after it's battle tested, that means it was buggy all along before that - so for the majority of the life of your product, your software was buggy - that is not acceptable).

If you believe I am a dreamer, an inexperienced developer, let me prove you wrong. Show me some code you believe it's impossible to make reliable without "battle-testing" it over a decade, and I will show you how.


you assume a little bit too much about your discussion partners. calling them inexperienced etc...

this takes away the whole strength of your argument plus lets you sound like a douche.

plz fix


The original argument is what I consider one of the biggest beginner mistakes. So the inexperience argument is very much relevant.

I was young and inexperienced once, and I had the same belief. But after bumping my head against the wall, I learned. In the meantime, I saw plenty of people bumping their head against the same wall. They all learned.

So therefore I assume this person is indeed inexperienced. If I'm talking to some senior developer, then it's up to me to reassess my way of thinking, and looking where I'm missing something. But if not, then it's just the phase we all went through.


I agree with him. It's indicative of more idealism than practical sense, which is indicative of inexperience.


> I would love to see highly optimized code that is easy to read.

That that's a trade-off is [just] a deficiency in our programming languages and tooling around them. As we evolve our young field (and if you've had a long career, then it is even younger to you!) I trust we will find more and more highly-optimized code that is easy to read.


Computer languages are a means of communication between a human and a computer. They clearly evolved closer to human readability and understanding, and away from computer language itself.

When you optimize, your code needs to be closer to computer language again, and therefore further away from human readability.

The only way to have both, is to have a system that can optimize the code automatically. We are getting closer to that, but still sometimes this is not enough.


As a 20+ year vet, what worries me is when someone starts talking in terms of good or bad.

It's a spectrum, not a binary. There is code that's just egregiously bad. There's code that's just egregiously good (and it's not clear which is more expensive to produce and maintain). But the vast majority falls into the grey area in between those extremes.

The other poster (koonsolo) has just seen enough different codebases to have come to the conclusion that it's not nearly the most important aspect of a project's success. And I agree with them on that point (wholeheartedly in fact).

Developers concentrate on it because it's what they're in all day, the same way that a DBA is going to concentrate on the data model because that's where they're at all day.

But in the grand scheme of things, developers way overblow the value of great code. I also have reservations about the claim that most developers even know what great code _LOOKS_ like, I've seen too many terrible codebases that were claimed to be well designed by their creators.

My point is that if the code isn't actively causing you problems then leave it be, and stop calling it good or bad. Call it what it is, working code that doesn't need to be bikeshedded over.


> I've seen terrible code with plenty of quick hacks on top of that, which was running very stable, because it was battle tested for years in production.

"Battle tested" is not what is meant by testing here, though. And I've seen software that was "battle tested" and rock-solid despite barely being _actively_ tested at all. Mind you, it was at least part of the economics of the software itself - "battle" (i.e. "in production") testing was perfectly viable in its case.


this is pure argument from authority.

I value testing, but primarily because (in years of experience, etc.) it makes code easier to refactor or reuse, not easier to make _correct_. Actually, this also applies to static types, which I also value.

But suitability-to-purpose (the best proxy for correctness that I've yet found) is far more determined by peer-reviewed design than by tests.


I agree that testing is not a panacea, but I also don't think it is completely divorced from design.

I see testing and design as cooperative processes. I agree that design is almost always the highest-yielding process in any non-trivial program, but I think of testing almost as a design phase that tries to invert the design to look for weaknesses.


I suspect what he’s referring to is the fact that so many people think that, because they have 100% code coverage in their tests, they have quality software. I’ve seen so many times a team claim that with pride when it’s brutally obvious to anyone who looks that the product is broken. They just don’t seem to be able to reconcile that problem when asked about it.

What you test matters every bit as much as how much you test. Tests require thought and design as well. Too many developers don’t seem to understand that.


That's part of what I was talking about, but the original statement in the article was that rigorous testing guarantees software quality, and it also says "there is no other solution", and that is demonstrably false.

One example: A telecom product that processed phone calls. I stepped into a role on the team building the call progression handling system, which was based on an ad hoc state machine and some inter-task communication that included a combination of mutexes and message queues. It was a mess. Our testers succeeded in churning out a lot of bug reports. Not a single one of those bug reports did anything to improve the quality of the system -- even if we're not being pedantic and saying that only the patches in response to the bug reports were what might have improved the quality. Because none of the patching we could do really made any net increase in quality. The only thing that improved system quality was stepping back and re-examining the design. We changed the existing undocumented design to get rid of deadlocks by just using one inter-task mechanism (just queues, no mutexes) and by formalizing the state machines so that none of them got stranded in an unexpected state. Once the design was formalized this way we found -- by inspection, not testing -- a number of fundamental bugs in the states and transitions. After fixing those, more system testing showed some more bugs, but we were able to assess these in the context of the design and fix them at the root cause rather than just hacking on patches. The design changes also enabled automated integration testing that wasn't possible with the old ball of mud. This helped save time in allowing the [mostly manual] system testers to focus on certain areas of the system.

I've worked in other situations where lots of testing just resulted in lots of bug reports, which ended up mostly getting deferred and thus did nothing whatsoever to improve product quality.

And I've worked on software teams that were in fact producing rather high quality software, but we were building the wrong thing. It's really hard to fix bad requirements through testing.

I don't want to sound like I'm anti-testing, but I think it should be understood that testing isn't the source of quality, it's just the last phase of a process that hopefully has (non-test) mechanisms to ensure a quality product in the upstream process phases.


While I agree with you in general, it should be noted that OP was talking about unit tests, not manual or integration tests. Unit tests are always touted by dynamic typing people as their equivalent of the static typing fast feedback loop.


Right, spending all your effort on the one true design gets you in architect astronaut territory. Similarly for existing systems, conjuring up the grand rewrite in the sky (as Robert Martin calls it, I think?) is equally poor. Instead, recognize that design is important but will constantly evolve. Tests give the ability to incrementally redesign quickly and confidently.


Testing tends to validate design from the point of testability. If it feels like the code is resisting testing, that usually ends up being an indicator of bad design (though not always vice versa).


This is where we are now. We know how to make testable designs, since effort has been put towards it.

What we lost is being able to distinguish good vs bad designs in other key aspects. One indicator I use is the number of test cases needed for coverage. A poorly separated design will need to test many combinations, having not isolated the factors.


I have noticed another indicator of good design is how easy it is to add specific fixes/features without having to think too much about how many places you'll need to make changes. Interesting point about no. of test cases. I tend to end up with several combinations because I prefer to focus on integration tests more for crud-type applications (a test configuration type/class comes handy in such scenarios).


As an example, if testing a 'send' function that takes a sender, content/media type, and recipients, where recipients can be a single contact or a group, one shouldn't have to check that every combination of content/media type works with single or group recipients and every other option. If the design is decomposed in orthogonal ways, K+L+N+M tests plus a handful of overall edge cases should do, not KxLxNxM, when the variable dimensions have K, L, N, M possibilities each. If the implementation structure is not cleanly separated, the cross product of test cases is needed.
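
A crude Python sketch of what I mean (all names invented): if the pieces are orthogonal, each gets its own handful of tests instead of the cross product.

  def normalize_recipients(recipients):
      # single contact or a group -> always a list; tested on its own (K cases)
      return list(recipients) if isinstance(recipients, (list, tuple)) else [recipients]

  def encode_content(content, media_type):
      # content/media handling, independent of who receives it (L cases)
      return (media_type, str(content))

  def send(sender, content, media_type, recipients):
      # composition only; needs a handful of edge-case tests, not K x L
      payload = encode_content(content, media_type)
      return [(sender, payload, r) for r in normalize_recipients(recipients)]

  assert normalize_recipients("alice") == ["alice"]
  assert normalize_recipients(["a", "b"]) == ["a", "b"]
  assert encode_content("hi", "text/plain") == ("text/plain", "hi")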


Exactly. Testability requires modularity and reproducibility.


Wow. I thought I recognized that name - the author of this post interviewed me for a job several years ago when I was first learning how to code. It didn't work out as I was waaaaaaay too junior at the time but I'll always remember that interview fondly.

My background was more in math and my interests were in PL theory, so he quizzed me on the lambda calculus, and turned me onto Essentials of Programming Languages by Friedman & Wand (and off the bad compiler books I had been reading previously), and pointed me towards material for further study in programming languages and lambda calculus.

The interview was still fairly practically oriented despite those detours, and I learned a great deal that helped me to quickly find a solid technical role not long after. But more importantly, he and his colleagues who interviewed me were incredibly kind and respectful, and instilled this sense that both computer science and software engineering could be a lot of fun. :)


You should apply again, the company the author works for is looking for people “Proficient in Java and C++ with strong object-oriented design skills”.


Lisp is too complicated.

For years I used to say, "I'm keeping a bank of brain cells free for learning Lisp one day." I knew it was important, I knew one day I would sit down and learn Lisp. Eventually, that day came. As it happened, I chose PG's book to start with, and (as is my wont) I began with the Table of Contents. That's as far as I got. Somehow, in the years leading up to that moment, I had acquired the necessary background information to be able to "get" Lisp just from reading that ToC. My mind was blown. Here was the perfect, the complete, the only needful language. And then I got mad. I got mad at you and me and all our peers. For wasting time. For wasting attention. For fucking around with syntax and bullshit for fwcking decades when we had Lisp. All the time and energy and money, burned like so much trash, because not Lisp.

So did I start using Lisp? No, of course not. Python pays (paid) the bills. Sick world.

- - - -

Anyway, in the meantime, I discovered a language called Joy. I'm pretty sure it's the simplest useful language. It is a point-free form that's good for Categorical Programming. It's not based on Lambda Calculus so there are no variables, no binding environments. You can do algebra on it and it's easier than Squiggol ( https://en.wikipedia.org/wiki/Bird%E2%80%93Meertens_formalis... ).

I was messing around with Make-a-Lisp the other day but I didn't get very far because, as I say, Lisp is too complicated (not to understand, to implement. It's a PITA.) Joy is simpler. The syntax is trivial, the grammar is trivial, the interpreter is simple, it's just a beautiful elegant formal system.

To be clear, I'm not complaining about Lisp, I think Lisp is the best language (Except for Joy, which is even better, IMO.)


> Somehow, in the years leading up to that moment, I had acquired the necessary background information to be able to "get" Lisp just from reading that ToC.

Damn, you are a genius! As for myself, I can only gain this level of understanding by skimming the index (or maybe the appendices). ;)


Lisp development is niche, but has sufficient use so that languages like Common Lisp, Clojure, and Racket will hopefully be supported and enjoyed for a long time.

The author hits on the sore point of available libraries to hit APIs and I would add access to deep learning infrastructure. I use the Hy language (hylang, has a Clojure style syntax) that is built on Python (you can get a copy of my Hy book for free, the minimum price is free https://leanpub.com/hy-lisp-python), but I still like Common Lisp as my main driver.

Anyway, I consider myself to be very fortunate to have about half of my career since 1982 use Lisp languages.


The author of the article, Anurag Mendhekar, is one of the authors in the original paper on AspectJ [1], which introduced Aspect-Oriented Programming. The origins of AOP are in Smalltalk and the Meta-Object Protocol, I think.

I really liked Aspect-Oriented Programming, despite its great issues with mutually interfering aspects etc. I wonder if it will make it as a big paradigm, but I hope it does.

[1] https://www.cs.ubc.ca/~gregor/papers/kiczales-ECOOP1997-AOP....


I have used AspectJ in a somewhat unhealthy dose in my project, mostly for reasons aspects were not even intended for. We have to work on giant configs reified as giant Java objects, and I was able to collect enough metadata about the code using aspects, and keep it tied to the live objects, in such a way that I could simulate the creation, transformation, and movement of method calls.

Yes, a byte code parser and reflection might have achieved the same, but the library made it a breeze. Maybe Java is not a hacker's language, but the ecosystem comes with a hacker's toolbox.


> not a hacker's language, but the ecosystem comes with a hacker's toolbox

I have always loved calling Java the mullet of programming languages for this reason -- a shallow investigation reveals a GC'd OO language meant for getting down to the business of some boring insurance or finance apps. Around the back, it's got enough dynamic features for a full party.


> You’re always thinking about how information is acted upon, how it is transformed, and how it is produced. I have yet to find a foundational framework that captures this inherent intentionality (the ‘how’) that is better than the λ-calculus.

Yes! Lambda calculus is my "native brain programming language" and I can't find a language that expresses it better than the Lisps. I never have to bend around the language; it feels as if I can always use exactly the approach I wanted to solve something.

Another big benefit is the simple syntax makes complex metaprogramming achievable. You just move the s-expressions and atoms around. Unnest and nest them. Like normal code. It makes sense.


> I have never had a static type checker (regardless of how sophisticated it is) help me prevent anything more than an obvious error (which should be caught in testing anyway).

This is the static vs dynamic type debate in a nutshell. Personally I think Typescript hits a sweet spot of static typing by default with an escape hatch (via the `any` type) when you need it. I think more self-documenting code makes the tradeoff well worth it (with the added guarantee that the documentation is correct). And most importantly, I like getting feedback from the editor as I'm typing rather than having to write code then run tests, then write some more and perhaps modify the tests, etc.


In that debate, we quite often hear something along the lines of "I can't remember the last time a static typechecker actually helped me" and that makes me imagine a fish saying they can't remember the last time that water actually helped them. Maybe the ambient help is so constant that it no longer even registers in long-term memory.


On the other hand, I've worked with Erlang systems where almost all issues we saw in production would have been prevented by a type-checker.

There are, however, very few ergonomic type systems. At the moment I think Rust is, sadly, the only one.


What makes it ergonomic in your opinion?


>Personally I think Typescript hits a sweet spot of static typing by default with an escape hatch (via the `any` type) when you need it.

That has been available in Obj-C, C# and others...


Even better, use `unknown` when at all possible instead of `any` - it's like `any` but you need to assert/inspect it to unwrap anything from it.


It's like saying you don't need a spell checker because you never make mistakes. Pure hubris.


No, it's like saying "I can't remember the last time I needed to turn my document into a string literal embedded into the word processor's source code, so that a spelling error could be caught when the word processor is recompiled. I test my document using an easy-to-use run-time spell checking feature."


This is not an apt analogy at all. You don't recompile your editor to get type errors. The question is type checking vs unit tests. If your text editor can type check as you type, the feedback loop is much tighter than recompiling and rerunning and possibly modifying handwritten unit tests.


> The best any static type checking will let you do is “array[float]”

I know this guy is smart and all, and I love Lisp as much as the next guy, but this article really reads like it was written by someone who hasn't used Haskell.


> Now wait a minute, I’m sure you’re thinking. I’ve never proved sh*t about my functions. I’m betting that you have. And that you do all the time. You’re always convincing yourself that your function is doing the right thing. Yours may not be a formal proof (which may be what leads to some bugs), but reasoning about code is something that software developers do all the time. They’re playing the code back in their head to see how it behaves.

I sometimes try to point this out when arguing in favor of Formal Methods. You're already using informal Formal Methods anyway. Just write your reasoning down and make the computer double-check it.

"Oh the expense!" I hear them cry. But where's the spreadsheet comparing that with the expense of the faulty reasoning? Where is the Manhattan Project to lower the cost of logic?

And don't get me started on (what Guy Steele Jr. calls) Computer Science Meta-notation, which has no official standard, and isn't a runnable language. ( https://groups.csail.mit.edu/mac/users/gjs/6.945/readings/St... )


Everybody says Lisp is dynamic. OK, but it's not Python. Don't forget that CL, and in particular the SBCL implementation, does pretty good compile-time type checking. Nice to note is that we get the warnings instantly, because during development we can compile the function we are working on with a keystroke. Sure, it is not a Hindley-Milner type system (there's one in development: https://github.com/stylewarning/coalton/) but it is "good enough".

BTW, Atom and VSCode have pretty good plugins these days (https://lispcookbook.github.io/cl-cookbook/editor-support.ht...) and CL might have more libraries than you think (https://github.com/CodyReichert/awesome-cl). A lot is going on.


Hi all,

I'm preparing release 250 of the TXR Language, the bulk of which is a Lisp dialect called TXR Lisp.

Release 250 adds compiler optimizations, like jump threading, some dead code elimination, and a few peephole reductions.

This work is finally possible because I had put it on hold due to not wanting to write such code without a pattern matcher, which we now have.

I'm fixing two bugs that were reported by a user, which will be nice in such a landmark release (250 releases, wow!).

The new structural pattern matching sub-language in TXR Lisp (new since 247 or 248) will be improved. Bugs are fixed. The way the @(or ...) pattern operator works has been rewritten, so certain corner test cases now pass. It generates better code.

There will also be some new features in the matcher. The @[fun ...] flavor of the predicate operator now lets you capture not only the variable, as usual, but, using an extra argument, the value of the predicate (which could be an extended Boolean with an interesting value).

For instance, is "abc" in the hash table htab? If so capture it as x, and also capture the value as y:

  This is the TXR Lisp interactive listener of TXR 249.
  Quit with :quit or Ctrl-D on an empty line. Ctrl-X ? for cheatsheet.
  Upgrade to TXR Pro for a one-time fee of learning Lisp!
  1> (let ((htab #H(() ("abc" "foo") ("xyz" "bar"))))
        (when-match @[htab x y] "abc" (list x y)))
  ("abc" "foo")
Here, htab is just being used as a function; any function will work that can take the argument "abc"; if it returns non-nil, then the predicate has rung true, the pattern has matched, and x takes "abc", and y the returned value.

By the way, to see quips like "Upgrade to TXR Pro for a one-time fee of learning Lisp!" you must add this to your ~/.txr_profile:

  (put-line (quip))


Lots of articles like this out there about Lisp. To quote Linus Torvalds:

> Talk is cheap, show me the code

If you think you can write better code in Lisp, well, show us some examples of that.


OK, here's an example: https://cs.brown.edu/~sk/Publications/Papers/Published/fkt-t...

In this paper the authors describe how students actually grasped fundamental concepts better in Scheme than in Java, despite being taught both throughout the course of the semester.


If they wrote the Java with closures, people would understand it just as well. Instead they used more complicated examples for Java, which skews the results. For example, here is what you get if you translate their Lisp example directly.

Their original lisp:

    ( let ([ x 3])
      ( let ([ f ( lambda (y ) (+ x y ))])
        ( let ([ x 5])
          (f 10))))
Java:

   { int x = 3;
     { Function<Integer,Integer> f = y -> y + x;
       { int x = 5;
          { return f.apply(10); }}}}
They argued that Java is less concise so they couldn't have as many examples, but I feel that example gets the point across just as well and it has a comparable number of characters.


Any examples of successful large scale production code in LISP? There are many examples of those in Java/C++/...


Walmart's transaction processing system in Clojure. Every single sale in at least North America goes through that system.


> For example, your invariant might be something like I’m expecting here a monotonically increasing array of numbers with a mean value of such and such and a standard deviation of such and such. The best any static type checking will let you do is “array[float]”.

Isn't this just plainly false? With dependent type system wouldn't you be able to encode all such invariants into the types? Curry-Howard isomorphism and so on?


I think so. Using existential types, we can encode predicates in such a way that the type is inhabited (i.e. there is a value of that type) only if the predicate is true.

I am learning Forth and Idris in my spare time these days. I have tried writing merge of two sorted lists in Idris, with the constraint that the merged list must have the length equal to the sum of the lengths of the input lists etc. - frankly, I much prefer Forth :D

https://www.idris-lang.org/


This is possible to do without dependent types. You could have something like this (using TS syntax):

  function produceComplexArray(arr: Array<number>): ComplexArray | null {
    ...
  }

  function consumeComplexArray(arr: ComplexArray) {
    ...
  }
If the only place where you can produce a ComplexArray is produceComplexArray, and produceComplexArray ensures that the arrays it receives are "a monotonically increasing array of numbers with a mean value of such and such and a standard deviation of such and such", then all the functions after produceComplexArray can be assured that they have the right array. Therefore, even with a relatively simple type system, you can ensure this invariant.

I think I got this idea from this article: https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...


I suspect the problem is the types begin to get fairly heavy weight. It can also start requiring you to make many atomic operations so that your invariant holds across larger chunks of code. And, that often isn't easy or really necessary.


> This feature, called tail call optimization (or TCO), has been around for a few decades and it is a sad commentary on the state of our programming languages that none of the modern languages support it.

TCO is part of ECMAScript specification, although apparently only Safari supports it.[0]

[0] https://2ality.com/2015/06/tail-call-optimization.html


Which, considering JavaScript is a language many compile to, sucks. I have said it many times: recursion is unbiased. Compiling a language's looping facility to something recursive is simple. Forcing one looping facility into the shape of another looping facility is the path to suffering.
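
A tiny made-up Python illustration of the asymmetry: turning a loop into a tail call is mechanical, but if the target language has no TCO the "same" loop blows the stack for large n.

  def count_down_loop(n):
      while n > 0:
          n -= 1
      return n

  def count_down_rec(n):
      # direct tail-recursive translation of the loop above; without TCO this
      # hits a RecursionError for large n even though it does the same work
      return n if n <= 0 else count_down_rec(n - 1)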


  In other words, static typing is pointless. It has, maybe,  
  some documentary value, but it does not substitute
  documentation on other invariants. For example, your 
  invariant might be something like I’m expecting here a 
  monotonically increasing array of numbers with a mean 
  value of such and such and a standard deviation of such 
  and such. The best any static type checking will let you 
  do is “array[float]”. The rest of your invariant must be 
  expressed in words as a documentation of that function.
I wonder if "design by contract" [1] tooling would help here? It seems to me one could express the stated invariants of the array as pre- or post-conditions on any function processing that array.

[1]: https://wiki.c2.com/?DesignByContract


That is funny, Ada/SPARK is not mentioned at all! How strange.

https://en.wikibooks.org/wiki/Ada_Programming/Contract_Based...

http://www.ada-auth.org/standards/12rat/html/Rat12-2-3.html

https://docs.adacore.com/spark2014-docs/html/ug/en/source/ho...

https://blog.adacore.com/contracts-of-functions-in-spark-201...

Plus, I do not believe that static typing is pointless. Even if it were ONLY for documentation, that would already make it pretty useful, in my opinion, like Erlang's type specifications, although there is a static analysis tool called dialyzer that identifies software discrepancies such as type errors and such.

In any case, from AdaCore's website:

> In statically typed languages, a type is mainly (but not only) a compile time construct. It is a construct to enforce invariants about the behavior of a program. Invariants are unchangeable properties that hold for all variables of a given type. Enforcing them ensures, for example, that variables of a data type never have invalid values.

> A type is used to reason about the objects a program manipulates (an object is a variable or a constant). The aim is to classify objects by what you can accomplish with them (i.e., the operations that are permitted), and this way you can reason about the correctness of the objects' values.

Just for the curious:

> A nice feature of Ada is that you can define your own integer types, based on the requirements of your program (i.e., the range of values that makes sense). In fact, the definitional mechanism that Ada provides forms the semantic basis for the predefined integer types. There is no "magical" built-in type in that regard, which is unlike most languages, and arguably very elegant.


And then if you don't test that Ada code, you blow up your Ariane 5.


https://www.researchgate.net/publication/220475937_Design_by...

For the record: that was ages ago, the language has improved a lot since then. Ada 2012 includes features for contracts, for example. Read the "Preface" of "Programming in Ada 2012" for details. :)


The Ariane 5 failure was due to the faster trajectory of that launcher causing an integer overflow that the Ariane 4 (which the software was cribbed from) did not experience. I'm not clear how you encode that in a contract.

The sad thing about this is that if that code had been in (say) C the launch would have been fine, since no integer overflow would have been trapped, shutting down the guidance computer.


Great point. In addition to the excellent sibling comment on Ada, I would add that I have personally benefited (in the field of scientific computing) from the contract programming facilities available in D:

https://dlang.org/spec/contracts.html


Thank you for mentioning that and reminding me about two of my favorite projects, Eiffel and PyContract. (Is part of why DbC is so useful the fact that it's an extremely concise way to write assertions?)

I'd love static typing systems if they allowed for DbC-style type declarations. There are times when the thing I need to use isn't a generic int, or even a short int, but an integer of three or more. That's also type information, just not type information based on the shape of the data in memory. I've never seen a type system that can make those determinations at compile time, even when working with literal numbers in source code.


Dependent type-systems like Agda can do that. Unfortunately they are also a pain to write.

Fun-fact, you can use church-encoding to transform from your generic int to a "shape" based encoding for the slowest operations ever in quite a few languages (Python included).
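
For the curious, a throwaway Python sketch of that Church-encoding trick (names invented): the "shape" of the nested functions is the number, and the arithmetic is indeed gloriously slow.

  zero = lambda f: lambda x: x
  succ = lambda n: lambda f: lambda x: f(n(f)(x))
  add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

  def to_int(n):
      # collapse the encoding back to an ordinary int
      return n(lambda k: k + 1)(0)

  three = succ(succ(succ(zero)))
  assert to_int(add(three)(three)) == 6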


Ada has that. Bounded types and subtyping.

For example, the type system is aware of the ranges of allowed values and can statically enforce this. Runtime checking is used by default but can be disabled.


You can express those invariants using dependent types. Look at something like Lean.


> next cool feature being dropped into Python or Scala, or whatever their flavor of the month is

I might be off-base here, but I feel like Python is at least mature enough that calling it "flavor of the month" isn't really justified


> I have never had a static type checker (regardless of how sophisticated it is) help me prevent anything more than an obvious error (which should be caught in testing anyway).

"I have never worked in a non-trivial system that changed over time."

FTFY


> That got me through a bachelor’s degree in Computer Science, but it always left me wanting for more expressiveness in my programs.

I share this sentiment, and I think a lot of it also comes from Rich Hickey talking about it.

As I relate to what this article expresses, I've noticed that people who write lisp type stuff talk about programming in a different way. In some sense there is an academic feel to it. In another sense there is a maker feel to it.

I think Dan Friedman might have said something similar about static typing not helping too much, but saw dependent types as something interesting


When I look at languages I use regularly (largely Emacs), Lisp is the one I'd love to try to build something large with. It feels quite powerful and the syntax is pretty easy to understand. It could just be my personal lack of experience with Lisp, but there seems to be little standard in naming conventions and code formatting. This may be where a lot of developers look at Lisp code and declare it an unmaintainable mess. Often I've seen multiple styles of formatting in the same file when I look at non-emacs production code.


Emacs is a bit special; a significant number of practitioners are hobbyists in ELisp and not professional programmers (which is a great thing!). I don't judge the quality of C# code by looking at amateur Unity scripts for example.

Depending on your flavour of Lisp, there are a /ton/ of naming conventions. Here's a good reference for Scheme/Racket [1], under the section "Names".

[1] https://docs.racket-lang.org/style/Textual_Matters.html


> But it has now taken a new interpretation in the last couple of decades: Static typing is a form of compile-time error checking, so it will help you produce better quality code. It is as if static typing is a magical theorem prover that will verify some deep properties of your program. This is where I call bullsh*t.[sic] I have never had a static type checker (regardless of how sophisticated it is) help me prevent anything more than an obvious error (which should be caught in testing anyway).

That's an anecdote; facts are that serious critical bugs that lead to privilege escalation and loss of data have existed in codebases that would surely have been caught by various static type systems.

> In other words, static typing is pointless. It has, maybe, some documentary value, but it does not substitute documentation on other invariants. For example, your invariant might be something like I’m expecting here a monotonically increasing array of numbers with a mean value of such and such and a standard deviation of such and such. The best any static type checking will let you do is “array[float]”. The rest of your invariant must be expressed in words as a documentation of that function. So why subject yourself to the misery of “array[float]”?

With dependent typing, it's actually quite possible to both define a type for “sorted list” and prove to the type checker that a sorting algorithm does sort it.


> That's an anecdote; facts are that serious critical bugs that lead to privilege escalation and loss of data have existed in codebases that would surely have been caught by various static type systems.

An anecdote at this point is at least better than the baseless claim you're making.

Also keep in mind the article is in the context of Lisp, so it would be static types on top of a garbage collected memory managed runtime with runtime type checks.


> That's an anecdote; facts are that serious critical bugs that lead to privilege escalation and loss of data have existed in codebases that would surely have been caught by various static type systems.

Do you have any non-strawman examples of this (e.g. C security holes are a strawman)?


Given who you are, I'm going to give you the benefit of the doubt, but... What? C security holes are strawman? What is your basis for saying so?


Firstly, C is statically typed, so if you're arguing for static typing as the cure for security holes, citing C examples of security holes won't speak well for your argument. Secondly, C has an incomplete static type system, which has no run-time safety net to compensate for it.

Many properties of C programs go unchecked. What is checked is easily defeated with type casts. Even the numeric conversions lack safety; e.g. floating point to integer conversion is silently implicit, with undefined behavior if the value is out of range.

All of that is a straw man example if we are arguing against incomplete static type systems, because the iron man is incomplete static with strong dynamic checks.

C also doesn't provide access to the language at compile time; we cannot execute test cases for the code in the same breath as compiling it. If code is cross-compiled, it needs unit tests compiled into individual executable programs, and an emulator like QEMU to run them.

Because C doesn't have run-time safety, unit tests do not reliably flush out type errors; programs with type errors can pass unit tests by fluke due to undefined behavior. E.g. a string that is not properly null terminated can have a zero byte in the right place anyway during the execution of a test case, and so on.

C security holes are not a good example to invoke in a static/dynamic debate as something that could be prevented with static, so I'm asking that: if those are the examples, please spare the debate.


OK, now I understand your point. C's static type system is inadequate to prevent C's security issues - that's kind of self-evident, when you put it that way. [Edit: More precisely, C's type system is inadequate to prevent the kinds of security errors common in C code.]

What's your stance on a stronger static type system being able to prevent them?


> Languages based on the λ-calculus make it really easy to “play back the code” in your head.

This really depends on the expression... I've seen behemoth expressions such that you have no choice but to write down intermediate results, or start explicitly breaking them down into sub-bindings so you can step through the code. And yes, you can usually step through subexpressions in the debugger, but if you have to debug anyway, it hardly matters that you can 'play back' the code in your head.

I also noticed there is a tendency in lispy languages to avoid introducing variable bindings just for the sake of naming the subexpression, because it often comes with a `(let ((name (subexpression))) rest-of-code)`, which also results in extra tabulation for the rest of the body. Compare with variable introduction in languages like python/js/c++/rust, etc., where such thing wouldn't cause touching the rest of the function/clause. I guess it's more of a 'functional language' artifact, e.g. in Haskell you'd probably have a similar issue.

The simple rules don't remove the complexity, the evaluation state has to live somewhere.

> I have never had a static type checker (regardless of how sophisticated it is) help me prevent anything more than an obvious error (which should be caught in testing anyway).

Err, really? Never dereferenced a null pointer, or tried using a list instead of a set? Static types are a form of testing; they don't replace it, but they massively save on the mechanical work of writing dozens of dumb tests. Having tests is very important for documentation, regressions, end-to-end/integration testing, but every test for something as stupid as 'throws on passing null' is just a time drain (unless you're launching rockets or something, of course).

Anyway, it's possible to have the best of both worlds by using gradual typing like mypy or JS flow -- you get all the runtime benefits (flexibility/ability to temporarily violate invariants) and you gradually harden the code which makes sense to type. I really wish Elisp (the lisp I'm mostly dealing with) had some sort of types, at least for simple things like 'associative list', 'string', 'function reference', 'nullable thing'; this would massively save me time on catching bugs. It has runtime type checks for `defcustom` things, but I've never seen type checking anywhere else (except for occasional runtime asserts).
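
For instance, a small made-up sketch of the gradual-typing workflow with mypy: annotate only the parts you care about, and the 'nullable thing' bug gets flagged before runtime while everything else stays dynamic.

  from typing import Dict, Optional

  def lookup(table: Dict[str, str], key: str) -> Optional[str]:
      return table.get(key)

  def shout(name: str) -> str:
      return name.upper()

  # mypy: Optional[str] is not str; at runtime this is an AttributeError on None
  shout(lookup({"a": "b"}, "c"))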

> The biggest advantage of this form of syntax is a form of minimalism — you don’t need spurious syntactic constructs to convey concepts.

> call these things macros, or syntactic extensions. In other words, you can extend the syntax of your language to introduce new abstractions.

I find these two sentences a bit contradictory ;) Anyway, I personally like macros, if used sparingly it can really help. One great upside of s-expressions for me is that you can do some cool things like 'find and replace' for whole subexpressions (for monkey patching third party code, for example). I use `el-patch` [0] in my emacs config and `advice-patch` for surgically changing the default behaviors of some org-mode functions to compile my blog [1].

That said a similar sort of thing is possible, for example, in python with `patchy` [2], and perhaps many other languages? But I guess it's not as organic as in lisps, e.g. `advice-patch` implementation is less than 100 LOC, whereas in case of python you have to rely on existing heavy lifting done by `ast` module.

As for the simple syntax, it really gets in the way sometimes, e.g. I'm always annoyed by constant quoting in Elisp because the same type of brackets (only `()`) is used. In comparison, in Clojure it's much more readable with (), [], {} (and more).

> Lisp is not an interpreted language. It is not slow

A bit of nitpicking, but... which Lisp? :) Anyway, these days it's often meaningless to say 'slow' without having a specific workload in mind and having done benchmarking; hardware improvements make it very hard to reason about.

> all implementations come with lots and lots of levers to tweak performance for most programs. In some cases the programs might need assistance from faster languages like C and C++ because they are closer to the hardware, but with faster hardware, even that difference is becoming irrelevant.

Well, this is true of most languages.

[0] https://github.com/raxod502/el-patch#el-patch

[1] https://github.com/karlicoss/beepb00p/blob/a4fd7cb95e1705412...

[2] https://github.com/adamchainz/patchy#patchy


> I also noticed there is a tendency in lispy languages to avoid introducing variable bindings just for the sake of naming the subexpression, because it often comes with a `(let ((name (subexpression))) rest-of-code)`, which also results in extra tabulation for the rest of the body. Compare with variable introduction in languages like python/js/c++/rust, etc., where such thing wouldn't cause touching the rest of the function/clause. I guess it's more of a 'functional language' artifact, e.g. in Haskell you'd probably have a similar issue.

I have found myself thinking about "Should I really introduce another let form for naming these things I am going to use at the next procedure call, to make it more readable? Or would it make the code less readable, because of the additional indent?". So there is something to this point. Right now, while not writing code, I would say:

The indentation makes the scope of the bindings clear. It tells you exactly where a binding's scope ends. It also allows you to shadow a binding from an outer scope for the purpose of using it in the inner scope. However, shadowing should be used with care, because it can also get in the way of readability, if used wrongly. It can however also increase readability, if used well.

In a language like Python, you can only write things at the same level of indentation. It does not make clear where these bindings might possibly still be used. The scopes are not as separated as in Lispy languages with the let form. That inherently makes it not as readable, I think. "The rest of the function" is not a very precise scope specification. Many bindings might only be used for one or a few subexpressions and not everywhere until the function ends. This only makes sense, though, if the code has side effects. Otherwise there will be no difference in where the bindings still exist, because what in Python is the rest of the "function" will be the inner parts of the expression of a procedure in Lispy languages.

I have found myself looking for a thing like let in Python though. I have sometimes thought: "Hey, can't I use a with-block for making a temporary binding?", but that has never worked for me, because I think you need to make a context manager for that? It has never materialized in my style of Python code. (I know Hy, but until it can offer me TCO and lambdas and let forms like Scheme, I am not sure I would like to use it.)
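
For what it's worth, a throwaway sketch of why the with-block idea never quite works as a let (invented code): the context manager gives you a block, but the binding still leaks into the enclosing function scope.

  from contextlib import contextmanager

  @contextmanager
  def let(value):
      yield value

  def f():
      with let(3 + 4) as x:
          print(x * 2)  # x is "bound" here...
      print(x)          # ...but still visible (and assigned) here, unlike a real let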

EDIT: Ah and not to forget about let* which can often reduce the amount of indentation a lot. Sometimes I find it good to make use of it.


In some schemes, you can have defines also in expression context. You can even add it by redefining a couple of standard forms.

Our scheme of choice, guile, introduced it in guile 3.


I just stop myself when I detect tab-driven development. Readability and function have to trump aesthetic concerns about indentation level.


When would shadow binding help create more readable code? (I agree with everything else in your post)


I think it would do that, if in the inner scope, you only need a part of something from the outer scope. Let's say you have a list of things and they have the name according to what they are; let's use "things" as a placeholder. Then in the inner scope, you are still working with "things", just not all of those from the outer scope, but for reading the code of the inner scope, it does not matter. You are still working with "things". OK, this is quite abstract. I am thinking about it in something like nested named lets, where you make a functional update to something that comes from the outer named let, inside the inner named let, and then use it as an argument to a procedure call. Situations like that. And perhaps only when it is difficult to name this subset of things anything else than the original name. If there is easily available another fitting name, then I would always go for that.

It does make renaming things harder though.


Yes it does make renaming harder, but if your compiler warns about unused variables it might save you more often than not.


Thanks


Anyone know of something like Rails for a Lisp?


Can someone please recommend a situation or common daily task that I can deal with using some version of Lisp, in order to learn it little by little without being forced to learn it all upfront, and that is not just Emacs? Like a shell replacement? Or some configuration manager? Or how do people actually learn Lisp if it is not their job?


Perhaps Babashka[0]? It sounds like the closest thing to what you want =)...

Alternatively you could just use cursive[1] or vscode[2] and follow the REPL guide[3]?

There's Racket, which has its own built-in environment[4].

Perhaps you'd prefer Common Lisp[5] instead, I don't believe you need to use emacs?

There are options =)...

- [0]: https://github.com/babashka/babashka

- [1]: https://cursive-ide.com/

- [2]: https://marketplace.visualstudio.com/items?itemName=betterth...

- [3]: https://clojure.org/guides/repl/introduction

- [4]: https://docs.racket-lang.org/quick/

- [5]: http://www.gigamonkeys.com/book/


So basically clojure x 5 and racket x 1. But what can I actually do in practice day to day?


Well I'm suggesting more clojure things because I know clojure, I've used a little of racket and it was fun, as well as done very little common lisp =)...

I'm not sure what you mean by practice day to day? These are all things that you could work on in various ways on a daily basis if you want?

Or do you mean kata-like things?

If so there's 4clojure[0] or codewars[1], which does racket[2] and clojure[3] alongside a host of other stuff.

If instead you're looking at a suggestion of what to work on, just pick a small project and regularly work on it, I've done lots of small data processing tasks, as well as developing small games, pick anything that interests you and play with it.

I mean Mazes for Programmers[4] is a great book, perhaps rewrite it in any language that interests you?

If that's not quite what you want, can you please clarify?

- [0]: https://www.4clojure.com/problems

- [1]: https://www.codewars.com/

- [2]: https://www.codewars.com/?language=racket

- [3]: https://www.codewars.com/?language=clojure

- [4]: https://pragprog.com/titles/jbmaze/mazes-for-programmers/


Thank you, all your recommendations are well received. I actually program a lot, these days in Python for data science, and I can do pretty much anything with it. But I am drawn to Lisp and started the SICP book. But other than really personal projects and self-explorations (which I also did with Haskell, Julia, Rust, Coq) I struggle to find a way to use a form of Lisp for something I can do at work every day, other than configuring Emacs (which is not as bad). I think I have to agree with your point that Clojure is the most practical option and fit for day-to-day use, because it is a Lisp with all the Java libraries. In the end it is the libraries... I guess my question should have really been what Lisp-based frameworks are there for data science, maybe.


Oh! Well perhaps you'd like this[0]?

The library author wrote two books as well[1][2].

Full disclosure, I've not used it, primarily because I've not yet had a chance to try it =)...

- [0]: https://neanderthal.uncomplicate.org/

- [1]: https://aiprobook.com/numerical-linear-algebra-for-programme...

- [2]: https://aiprobook.com/deep-learning-for-programmers/


> As a programmer, I carry around invariants (which is a fancy name for properties about things in my program) in my head all the time.

The author has probably not worked on large code bases or even in teams.

I often forget even my own code's purpose six months after having written it, let alone code written by teams of hundreds of people and millions of lines of code.

Dynamic typing will kill you in such (common) situations.

> In other words, static typing is pointless.

Yes, author is definitely very young.

There are two types of programmers: programmers who know that static typing is the only sane way to write robust, scalable, maintainable code, and programmers who haven't been in this profession for long enough.


> I started serious programming in my teens in BASIC on a ZX Spectrum+, although I had previously dabbled in (hand-) writing Fortran programs

> I ended up studying Programming Languages at Indiana University with Dan Friedman (of The Little Lisper / The Little Schemer fame). It was my introduction to Scheme (and the world of Lisp.) I finally knew that I had found the perfect medium to express my programs in. And it has not changed in those last 25 years

Author seems to have over 25 years of experience.


> Author seems to have over 25 years of experience.

That doesn't mean much. The author has been a researcher, then moved into management, and is now working on a relatively simple e-commerce project with one of the most hilarious mission statements I've ever read.

He's the kind of guy that makes me _shudder_ when he shows up in a workplace - because I already know I'm going to have to clean up after him. His dismissal of the documentation value of type systems really tells you how much experience he _actually_ has - that documentation value is exactly why I keep coming back to static typing, as it is for nearly every programmer I know, specifically because of how reliable things like "look up the source/documentation/whatever" are in statically typed code. Not everyone cares, and that's fine, but such a dismissal tells me he's very sheltered, and vastly overestimates the real complexity of his own project.

Also, apparently he worked on AspectJ, which tells me everything I need to know, and probably explains why he hates the JVM even more than I do.


It's possible to have 25 years of experience but if they're the same year repeated 25 times, then it's still a year :-)


Yes, but it makes you look like a fool if your argument against theirs is that they must be a newbie, when in fact they are a seasoned developer.

Instead, support your statements or opinions with a convincing argument.


I have 25+ years of professional experience and disagree with the author.


Every time I try out a dynamic language, I miss static types. I just love them so much for refactoring and the ability to more fully describe how components should interact with one another.

Then when I factor in the incredible performance and efficiency of a language like Rust, thinking about all the extremely difficult performance problems I’ve had to diagnose in even fast JIT languages like C# or modern JavaScript runtimes, I just can’t stomach the idea of choosing a language like that if I have another choice.

So maybe Lisp would be a revelation, but I just can’t bring myself to care.


>"...learning Scheme ..., ...You will, however, be a much better programmer if you do and you will come to appreciate the beauty of these languages...

Sorry, but this could be said about just about any language, as it is a very personal opinion.

I for one think that knowledge of any particular language does not make you a "much better programmer". Rather, it is a good understanding of general concepts and knowing how computers work.

The author's opinion about performance is also one-sided. For some tasks it doesn't matter at all, while for others it is never good enough.


> Call-by-value

> Mostly Functional

> Dynamically Typed

Sounds a bit like POSIX shell.

Yeah, I know that people doing functional programming especially do not want to be compared to shell scripters, but since I know how horrible shell scripting is, I wonder whether there could be a future where a proper, mostly functional language becomes the successor to the very practical bash. I mean, if we don't do something soon, JavaScript might take that place...


Maybe have a look at Closh [1], a Clojure-based shell.

[1] https://github.com/dundalek/closh


I think my problem is less that there are no alternatives you can set up yourself, and more that there is no useful standard ;-)

But having a list of good alternatives is probably good too.


POSIX shells are not as powerful as Lisp REPLs and Smalltalk transcript windows.


Seems like a lot of Lisp fans here. Can you recommend any Lisp repo to learn Lisp "good practices" and "design patterns" from?


Currently, Lisp is not a single language but a family of similar languages that all originate from the original LISP.

The languages' inherent flexibility gives you more freedom of expression than many can handle, especially those who come from stricter languages. The minimal syntax and the ability to create, in effect, a (sub-)language of your own add to that expressivity.

But we need guardrails to protect us from too much expressive freedom; see for example defmacro vs. syntax-rules/syntax-case.
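
To illustrate the kind of guardrail I mean, here is a small Clojure-flavoured sketch (illustrative only, my own example): a hand-built defmacro template can capture the caller's bindings, while syntax-quote plus auto-gensym makes that hard to do by accident.

    ;; Naive template: the `i` introduced by the macro shadows the caller's `i`.
    (defmacro do-twice-bad [& body]
      (list 'dotimes ['i 2] (cons 'do body)))

    (let [i 42]
      (do-twice-bad (println i)))   ; prints 0 and 1, not 42 and 42

    ;; Syntax-quote refuses to bind a bare symbol (it would be namespace-
    ;; qualified), and auto-gensym (i#) generates a unique name instead:
    (defmacro do-twice [& body]
      `(dotimes [i# 2] ~@body))

    (let [i 42]
      (do-twice (println i)))       ; prints 42 twice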

From my perspective, I want a language that gives me freedom when I want it, and protection ("guardrails") when I want that too, both at the same time if possible.

On invariants, what are dependent typing and property testing giving us now? How much room for improvement is there?


I find it odd that the author describes Haskell as ugly and its ancestor Miranda as very beautiful, as the two seem pretty similar in syntax to me. I find beauty in both.

I would only qualify Template Haskell as ugly.


I've installed Portacle, and ended up in a nest of errors I didn't understand.

Can anyone here recommend a non-editor-based version of Lisp?


What is an editor-based version of Lisp?

fwiw, I really like Racket. It's definitely the most clean/elegant Lisp around.


Thank you!

Racket gives me error messages that I can learn to understand, and doesn't drop me into Emacs like Portacle does.

The tutorial helps too... more thanks!


If Lisp is so good, where are all the libraries to create cool stuff?


What libraries are missing?


Why I’m a bad programmer


Here we go again. Another LISP fanboy article, and the usual HN responses and arguments. LISP is and has always been a niche language with limited impact on the world. It was fun to learn and play with, but I don't want to use it for real work. A few will disagree. That's fine. You do what you prefer, and best of luck.



