
I found Common Lisp to be surprisingly ahead of its time in many regards (debugging, repl, compilation and execution speed, metaprogramming), but unfortunately it doesn't have a large community, and it's showing its age (no standard package management, threading not built into the language). It's also dynamically typed which disqualifies it for large collaborative projects.



It has great package management with https://www.quicklisp.org/beta/ and some truly great, high-quality libraries, especially Fukamachi's suite of web libraries and so many others. Woo, for example, is the fastest web server in its benchmarks: https://github.com/fukamachi/woo (faster than the runner-up, written in Go, by quite a bit).
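Getting a Woo server running is only a few lines; here's a minimal hello-world sketch in the Clack handler style Woo uses (assuming Woo is installed via Quicklisp):

  (ql:quickload :woo)

  (woo:run
   (lambda (env)
     (declare (ignore env))
     '(200 (:content-type "text/plain") ("Hello, World"))))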

For parallel computing, we use https://lparallel.org/ . It's been great at handling massive loads across all processors elegantly. And for locking against overwrites in highly parallel database transactions, we use the mutex locks built into the http://sbcl.org/ compiler, which come with very handy macros.
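Roughly, the pattern looks like this (a sketch; expensive-transform, items and apply-to-database are placeholder names):

  (ql:quickload :lparallel)

  ;; one lparallel worker per core; 8 is just an example
  (setf lparallel:*kernel* (lparallel:make-kernel 8))

  ;; fan a computation out across all processors
  (lparallel:pmapcar #'expensive-transform items)

  ;; guard shared writes with an SBCL mutex
  (defvar *db-lock* (sb-thread:make-mutex :name "db-lock"))

  (defun commit-transaction (tx)
    (sb-thread:with-mutex (*db-lock*)
      (apply-to-database tx)))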


Slightly off-topic but I'm in awe of Fukamachi's repos. That one person has built so much incredible stuff is amazing to me. Not sure it's enough to get me using CL all the time, but it's very impressive.


The math library we use is incredibly fast with quaternion matrix transformations: https://github.com/cbaggers/rtg-math/

The only gaps we've had between our production code and Lisp are PDF handling (we use Java's PDFBox), translating between RDF formats (also a Java library), and encrypting JWT tokens for PKCE/DPoP authentication (also Java).

The complete conceptual AI system and the space/time causal-systems digital-twin technology are all in Common Lisp (SBCL).

Also fantastic is the sb-profile library in SBCL, which lets you profile any number of functions and see call counts, time used, and consing, all ordered by slowest cumulative time. That feature has been key to finding slow functions and optimizing them, leading to orders-of-magnitude speed improvements.
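The workflow is just a few forms at the REPL (the function names below are placeholders):

  (sb-profile:profile ingest-rdf infer-causes build-twin)

  (run-workload)          ; exercise the code as usual

  (sb-profile:report)     ; call counts, seconds and consing per function
  (sb-profile:unprofile)  ; stop profiling everything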


Are you able to go into detail about what sort of AI technology you are using? When you mention causal systems, do you mean causal inference?


We basically build a space/time model of the world, in which systems perform functions in events that take input states and change them into output states. Those events either causally trigger each other directly, or, when the input states of one event were output by another event, the event that output those states is taken to be the cause of the current event.

The conceptual AI models operational concepts based on an understanding of how human concepts work, performs inference and automatic classification using those concepts, and then learns new concepts. The operational side of the digital twin uses functional specifications held elsewhere, which is also true of the operational concepts, which use specifications in the form of conceptual definitions.

And the technology takes in RDF graph data as input, builds the digital-twin model from that data with extensive inference, then expresses itself back out as RDF graph data. (Making https://solidproject.org/ the ideal protocol for us, where each pod is a digital twin of something.)


Do you have links to more information?


We have a beta running on that framework: https://graphmetrix.com . The live app and Pod Server are at https://trinpod.us

We are working toward commercial launch in the coming weeks. (We are adding Project Pods, Business Pods, and Site Pods, with harvesting of the semantic parse we do of PDFs into the pod, so we can handle very big data.)


I don't really consider quicklisp to be "great package management" since you have to download it, install it, then load it. And don't forget to modify your sbcl init script to load it automatically for you. It felt quite cumbersome to get started using it, even though it was simple enough after that. Rust has truly great package management in my opinion. I run one command to install Rust and I immediately have access to all crates in crates.io.

EDIT: It's kind of ironic for me to make this claim since I use Emacs as my editor...
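For reference, the bootstrap I'm describing looks roughly like this (one-time setup, after downloading quicklisp.lisp):

  (load "quicklisp.lisp")
  (quicklisp-quickstart:install)
  (ql:add-to-init-file)        ; edits ~/.sbclrc so Quicklisp loads on startup

  ;; after that, pulling in a library is a one-liner:
  (ql:quickload "alexandria")

With cargo, by contrast, all of that comes with the toolchain install.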


> It's also dynamically typed which disqualifies it for large collaborative projects.

I've been around the block for long enough to see how far the pendulum swings on this one. I'm guessing that it starts going the other way soon.


In my opinion, after years in the industry, the benefits of type safety are too compelling and too well known, to the point that I don't even feel like debating it. It's not a fad that will change periodically.


It's a fad that has changed periodically, though I think the convergence of static and dynamically typed languages (dynamic holes in the former, gradual typing or optional static type checkers for the latter) will continue to reduce how much the current state of the fad matters for language selection in practice.

It probably won't reduce the intensity of the way a small minority of the community treats the language-level difference in holy wars, though.


I'm not following, could you elaborate?


Dynamic/static typing came in/out of fashion several times already. Any trend is temporary; neither kind of type system is a help or impediment in collaboration.


I think static typing originally tanked as a backlash against how verbose and painful it was in Java and friends (as well as the limited benefits, because of things like nullability).

Modern static type systems are a totally different beast: inference and duck-typing cut down on noise/boilerplate, and the number of bugs that can be caught is dramatically higher thanks to maybe-types, tagged unions/exhaustiveness checking, etc. I think we've circled around to a happy best-of-both-worlds and the pendulum is settling there.


Common Lisp was on the right track with gradual typing and decent inference, which makes a nice compromise compared to jumping through static hoops to bend the problem around a more rigid language.

The type declaration syntax definitely could use some love, I think it's a shame a more convenient syntax was never standardized. And sum types etc would be nice of course. It's all perfectly possible.
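For reference, the standard declaration syntax looks something like this (a toy example):

  (declaim (ftype (function (double-float double-float) double-float) midpoint))
  (defun midpoint (a b)
    (declare (type double-float a b)
             (optimize (speed 3)))
    (/ (+ a b) 2.0d0))

SBCL uses such declarations both to optimize and, where it can, to warn about call sites that disagree with them.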


I agree with everything you said here.

However, you have to consider that Common Lisp itself is quite different from other dynamically typed languages.

I find that, after the initial adjustment period with the language (which is significant, I admit), it's surprisingly hard to write messy code in CL, certainly harder than in Python or Ruby. At the very least, the temptation to do so is lower, because there are fewer obstacles to expressing sophisticated ideas succinctly.

And no, I am not talking about the ability to define your own macros and create DSLs. I think it has to do with the extensive selection of tools for creating short-lived local bindings, the huge selection of tools for flow control, and the strict distinction between dynamic and lexical variables.

There's just something about it that sets it apart from other dynamically-typed languages, even without the gradual typing aspect and even without the speed difference. Navigating a source codebase in Python without strict type annotations is like navigating in the dark in a swamp. I don't have the same issues in Common Lisp for the most part.

Maybe this has more to do with undisciplined programmers self-selecting out of CL than it has to do with any aspect of CL itself.

And on top of the excellent and unique language design, you have:

* A powerful CFFI

* An official specification

* The "REPL-driven" development style (if you want it)

* Several well-maintained implementations that generate high-performance machine code

* The unique condition system

* Literally decades of backward compatibility

* A core of stable, well-designed packages, including bindings to a lot of "foundational" libraries

* Macros if you really do want to invent your own syntax or DSL

Probably the only big downside is that the developer ecosystem is still focused around Emacs. That too is changing gradually but steadily, with Slyblime (SLY/SLYNK ported to Sublime Text), Slimv and Vlime (Vim ports of SLIME/SWANK), the free version of LispWorks for light-duty stuff, and at least one Jupyter kernel.

Also, Roswell (like Rbenv or Pyenv) and Qlot or CLPM (like Bundler or Pipenv) help create a "project-local" dev experience that's similar to how things are done in other language ecosystems.

And of course there is Quicklisp itself, which is a rock solid piece of software, and fast too!

Python and Ruby have their own merits, for sure, and there are plenty of things I have in Python that I wish I had in CL. But it really doesn't seem right to compare them; CL seems like a totally different category of language.


I think the tradition of using long-descriptive-names in Common Lisp for all identifiers cements much of that experience. Using 1-3 character variable names feels natural in C, but (outside of the most trivial circumstances) it's a faux pas in Common Lisp. A better vocabulary allows for a clearer formulation of the nature and intent of the constructs, improving readability greatly.


Line noise is a red herring in the static/dynamic comparison. You will still run into serious problems trying to shove ugly human-generated data into your nice clean type system.

For mechanical things where you the programmer are building the abstractions (compilers, operating systems, drivers) this is a non-issue, but for dealing with the ugly real world, dynamic is still the way to go.


I'm not sure I understand what makes dynamic typing better for handling real-world data. Yes, the data is messy, but your program still has to be prepared to handle data in a specific shape or shapes. If you need to handle multiple shapes of data, e.g. varying JSON structures, you can still do that with static types, using sum types and pattern matching.


Most modern static languages have some way to hold onto dynamically typed objects, stuff them into containers of that dynamic type, and do some basic type reflection to dispatch messy dynamic stuff off into the rest of the statically typed program. Sometimes it does feel fairly bolted on, but the problem of JSON parsing tends to force them all to have some reasonably productive way of handling that kind of data.


Yes, but this same argument works the other way: dynamically typed languages can do a half-assed impression of static languages as well. So it's a tradeoff depending on your domain.


Having programmed for 10 years in a fully dynamic language, though, I think I prefer the other way around. You tend to wind up documenting all your types anyway, one way or another, either with doc comments or policies around naming parameters, and you wind up building runtime type validation systems. Statically typed languages with cheats seem to get you to the right sort of balance much sooner.


The right balance really depends on your domain. The reason I'm so big on dynamic typing is because the most important part of the product I work on is a ton of reporting from an SQL database. I shift as much work as possible to the database, so the results that come back don't need to be unpacked into some domain model but are ready to go for outputting to the user. If I tried to do this in a static language I'd have a new type for every single query, then have to convince my various utility functions to work with my type zoo.


People seem to interpret blacktriangle's post in a parser setting. I don't know why, but if you're writing parsers, you're explicitly falling into the category he's mentioning where static types make a lot of sense.

GP's claim was that Java was too verbose. But verbosity isn't really the problem. There are tools for dealing with it. The problem is a proliferation of concepts.

A lot of business applications go like this: take a myriad of inputs through a complicated UI, transform them a bit, and send them somewhere else. With very accurate typing and a very messy setting (say, there's a basic model with a few concepts in it, and then 27 exceptions), you may end up modeling snowflakes with your types instead of thinking about how to actually solve the problem.


If you're referring to the "parse, don't validate" article, it's using the word in a different sense. The idea is that you make a data model that can only represent valid values, and then the code that handles foreign data transforms it into that type for downstream code to handle instead of just returning a boolean that says "yep, this data's fine"


Right, but where this gets obnoxious is when you're writing code at the "edge", where customers can send you data, and the formats you accept and process can change wholly and frequently. I've dealt with this problem before in a Scala setting where we created sealed traits to have value classes for each of our input types, but it was obnoxious enough that adding a new form of input was pretty costly from an implementation-time perspective, enough that handling a new input format was something we planned explicitly for as a team. Sure, you could circumvent this by using something like Rust serde_json's Json Value type, but then you're basically rolling an unergonomic form of what you could do in a couple of lines of Python.

I've mostly come to the conclusion that dynamic languages work well wherever business requirements change frequently and codepaths are wide but shallow (e.g. many different codepaths, but none of them are particularly involved). Static languages work better for codepaths that are narrow but deep, where careful parsing at API edges and effective type-level modelling of data can create high-confidence software; in these situations the logic is often complicated enough that requirements just can't change that frequently. I wish we had a "best of both worlds" style to help where you have wide and deep codepaths, but alas that'll have to wait for more PLT (and probably for a time when we aren't fighting silly wars over dynamic vs static typing as if one were wholly superior to the other).


I've found this to be a non-issue in Clojure with spec validation. Some call this gradual typing.


This has not been my experience.


Alexis King's "Parse, don't validate" is pretty much the final word on using type systems to deal with "messy real world data": https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...

tl;dr when used properly, static type systems are an enormous advantage when dealing with data from the real world because you can write a total function that accepts unstructured data like a string or byte stream and returns either a successful result with a parsed data structure, a partial result, or a value that indicates failure, without having to do a separate validation step at all -- and the type system will check that all your intermediate results are correct, type-wise.


I've been using the techniques in that article for years in JavaScript, CL and Clojure. While static types are a notable part of it, the more important point is just learning to design your systems to turn incoming data into domain objects as soon as possible.
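A toy sketch of what that can look like in CL (the names are made up): the boundary function turns untrusted input into a structure that can only hold valid data, or signals an error, so downstream code never re-checks.

  (defstruct user
    (name "" :type string)
    (age  0  :type (integer 0 150)))

  (defun parse-user (plist)
    "Turn untrusted input into a USER, or signal an error."
    (let ((name (getf plist :name))
          (age  (getf plist :age)))
      (unless (stringp name) (error "bad name: ~s" name))
      (unless (typep age '(integer 0 150)) (error "bad age: ~s" age))
      (make-user :name name :age age)))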


There are runtime analogs for most of the modeling techniques people use in statically typed languages.


You're not the first, nor the fourth for that matter, person to respond to dynamic-typing advocacy with that blog post, and it's an interesting post, but it misses the whole point. The problem is not enforcing rules on data coming in and out of the system. The problem is that I have perfectly valid collections of data that I want to shove through an information-processing pipeline while preserving much of the original data, and static typing systems make this very powerful and legitimate task a nightmare.


Not a nightmare at all. For example, if you're doing JSON processing in Rust, serde_json gives you great tools for this. The serde_json::Value type can represent any JSON value. You parse JSON to specific typed structures when you want to parse-and-validate and detect errors (using #[derive(Serialize, Deserialize)] to autogenerate the code), but those structures can include islands of arbitrary serde_json::Values. You can also parse JSON objects into structs that contain a set of typed fields with all other "unknown" fields collected into a dynamic structure, so those extra fields are reserialized when necessary --- see https://serde.rs/attr-flatten.html for an example.


> The problem is that I have perfectly valid collections of data (...) and static typing systems make this very powerful and legitimate task a nightmare.

What leads you to believe that static typing turns a task that essentially boils down to input validation into "a nightmare"?

From my perspective, with static typing that task is a treat and all headaches that come with dynamic typing simply vanish.

Take for example Typescript. Between type assertion functions, type guards, optional types and union types, inferring types from any object is a trivial task with clean code enforced by the compiler itself.


Presumably the GP's data is external and therefore not checkable or inferrable by typescript. This makes the task less ideal, but still perfectly doable via validation code or highly agnostic typing


> Presumably the GP's data is external and therefore not checkable or inferrable by typescript.

There is no such thing as external data that is not checkable or inferable by typescript. That's what type assertion functions and type guards are for.

With typescript, you can take in an instance of type any, pass it to a type assertion function or a type guard, and depending on the outcome either narrow it to a specific type or throw an error.


You said:

> inferring types from any object is a trivial task

This is true for values defined in code, but TypeScript cannot directly see data that comes in from eg. an API, and so can't infer types from it. You can give the data types yourself, and you can even give it types based on validation logic that happens at runtime, and I think this is usually worth doing and not a huge burden if you use a library. But it's disingenuous to suggest that it's free.

The closest thing to "free" would be blindly asserting the data's type, which is very dangerous and IMO usually worse than not having static types at all, because it gives you a false sense of security:

  const someApiData: any = { foo: 'bar' }

  function doSomethingWith(x: ApiData) {
    return x.bar + 12
  }

  type ApiData = {
    foo: string,
    bar: number
  }

  // no typescript errors!
  doSomethingWith(someApiData as ApiData)

The better approach is to use something like io-ts to safely "parse" the data into a type at runtime. But, again, this is not without overhead.


> This is true for values defined in code, but TypeScript cannot directly see data that comes in from eg. an API, and so can't infer types from it.

No, that's not right at all. TypeScript allows you to determine the exact type of an object in any code path through type assertions and type guards.

With TypeScript you can get an any instance from wherever, apply your checks, and from thereon either throw an error or narrow your any object into whatever type you're interested in.

I really do not know what leads you to believe that TypeScript cannot handle static typing or input validation.


They don't, though.

For example: a technique I've used to work with arbitrary, unknown JSON values, is to type them as a union of primitives + arrays of json values + objects of json values. And then I can pick these values apart in a way that's totally safe while making no dangerous assumptions about their contents.

Of course this opens the door for lots of potential mistakes (though runtime errors at least are impossible), but it's 100% compatible with any statically-typed language that has unions.


At the edges sure, but why allow that messiness to pervade the system instead of isolating it to the data consuming/producing interfaces?


You still have to deal with the ugliness in a dynamic language too, but you might be tempted to just let duck typing do its thing, which could lead to disastrous results. Otherwise, you'll have to check types and invariants, at which point you might as well parse the input into type-safe containers.


Can you elaborate on that? As I see it dynamic was popular in the 90's (Python, JS, Ruby), but outside of that it's always been pretty much dynamic for scripting and static for everything else.


Consider that the first Fortran (statically typed) and Lisp (dynamic) implementations date back to the late 1950s. Since then there has been a constant tug of war between these camps, including BASIC, COBOL, Smalltalk, and Pascal, with trends falling in and out of favour.

All this, however, is rather orthogonal to the strength of type systems. The CL type system, for instance, is stronger than that of Java or C.


None of those languages were popular in the 90s.


> None of those languages were popular in the 90s.

JS was, because browsers. Python was starting to be toward the end of the 90s. Ruby (as I understand) was in Japan though it wasn't until Rails took off that it became popular elsewhere. Perl (not on the list but similar to those on the list) definitely was.


Perl was!


Common Lisp supports type annotations. There is even an ML-style language implemented in it (see Coalton). Quicklisp [1] is used for package management, bordeaux-threads [2] for threading.

1. https://www.quicklisp.org/index.html

2. https://github.com/sionescu/bordeaux-threads
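A minimal bordeaux-threads sketch (expensive-computation is a placeholder name):

  (ql:quickload :bordeaux-threads)

  (let ((worker (bt:make-thread (lambda () (expensive-computation))
                                :name "worker")))
    (bt:join-thread worker))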


Common Lisp has a smaller community than the currently most popular languages, but I'm consistently impressed by the range and quality of the libraries the community has created (despite all the "beta" disclaimers) [1].

Regarding type checking, Common Lisp is expressive enough to support an ML dialect (see Coalton), and is easily extended across paradigms [2].

1. https://project-awesome.org/CodyReichert/awesome-cl

2. https://coalton-lang.github.io/


> It's also dynamically typed which disqualifies it for large collaborative projects.

Like Github or WordPress?


.. are those considered "good"? GitHub is meh at best considering it's 13 years old and the billions of dollars poured into it, and WordPress, I don't think anyone can reasonably say that it's sane software. They are both good arguments against dynamic typing imho (especially the latter).


If what you're saying is true and those pieces of software are mediocre at best, this implies that software quality and success have no correlation. Or perhaps that software quality is not at all what the users of this site tend to think.


> this implies that software quality and success have no correlation.

I mean, like in most other fields, no? The most successful movies, books, foods, artworks, furniture, ... are fairly different from the best ones.


But the least successful art is by every measure bad; I mean, for example, the movies that score 1.0-2.0 on IMDb. But yeah, perhaps the comic-book movies are not unlike WordPress.


Quality of code is not necessary for success, nor does it guarantee it.

It does improve your quality of life as an engineer, I can promise you that.


> It's also dynamically typed which disqualifies it for large collaborative projects.

That's quite an absolute statement. At least Erlang/Elixir users would tend to disagree. "Dynamically typed" can still represent a huge variety of approaches, and doesn't have to always look like writing vanilla JavaScript, for example.


I hear your argument as "there exist dynamically typed languages therefore the benefits of typing don't matter". To be positive I'll say that it's better than the pendulum argument.

I'm aware that there exist dynamically typed languages in which large projects are written, I'm saying that they would be better off with type safety.


> It's also dynamically typed which disqualifies it for large collaborative projects.

You can add type declarations and a good compiler will check against them at compile time: https://medium.com/@MartinCracauer/static-type-checking-in-t...
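For example, once the declaration is in place, SBCL will flag a mismatched call site at compile time (count-widgets is a made-up function):

  (declaim (ftype (function (string) fixnum) count-widgets))
  (defun count-widgets (name)
    (declare (type string name))
    (length name))

  (defun broken-caller ()
    (count-widgets 42))   ; compile-time warning: 42 is not a STRING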


It is non-optionally strongly typed.

It is optionally as statically typed as you want, depending on what compiler you use. I am mostly familiar with SBCL, which does a fair bit of type inference and will tell you at length where your lack of static typing means it will produce slower code.



