Here's a thorough answer by someone who has worked a lot on types, optimizations and numerical libraries, and who still uses Python as his first language (for academic purposes):
> The unnecessarily-long-for-debugging compilation times certainly put me off. I think I do like the semantics of Julia better than Python, but the compilation times are a deal-breaker for me. Perhaps I might make the switch once this gets fixed in the upcoming years :)
I like Julia, but this is a point of frustration for me as well. Compilation time is fine for smaller scripts. But whenever I've tried using more powerful packages such as the SciML ones (DifferentialEquations.jl, DiffEqFlux.jl, Symbolics.jl), the precompilation time during the edit/debug cycle can be excruciating. It's a shame because the SciML packages are so amazing.
I'm waiting until Julia's static compilation process improves and is standardized. On-the-fly compilation that disappears every time you close the REPL will never be fast enough to give users a good experience when working with highly complex code.
EDIT: Just to be clear, I'm overall confident that things will improve. The folks at Julia have already made good progress in reducing precompilation times from where they used to be. Looking forward to seeing further advancements in the future.
Quote: "Coming very soon: a version of DifferentialEquations.jl that fully precompiles the solvers on Vector{Float64}, virtually eliminating the any #julialang #sciml JIT lag."
It's still rough. They've been at it for years and it's a perpetual bed of sand... A lot of machines will fail to install it because it's so resource-intensive to install... Promising idea overall; maybe in three years or so it'll be worth using for something outside of research.
How does that even work? Even with full precompilation, the LLVM time should still be there, no? In my usual Julia workloads, LLVM time is a significant fraction of compile time, so full precompilation only takes 30-50% off latency.
You can strip it out with a system image, and you don't even need more than the basic StaticArrays and LoopVectorization stuff in the image to get almost all of it.
Nice, that Twitter announcement looks fantastic! Looking forward to trying out the new precompiled version.
And agreed that the folks at SciML (and the rest of Julia) have put amazing efforts into reducing the compilation lag from where it used to be :) I'm optimistic that things will improve--it'll just take some time.
They've been at it for half a decade or so. Ignoring compilation times, they shuffle the code around so frequently that its only real use, imo, is for the authors to publish papers and stay three steps ahead of any of its users, who give up after a few cycles of hoping for a stable tool. It's a shame, but it's academia at its finest.
This part of your criticism seems quite disingenuous. You're simultaneously criticizing them for not improving their code quickly enough, and for changing it too much.
I appreciate your view but seeing as how most Julia projects work this way I sometimes wonder if it's just a problem with the language itself. Not trying to be a troll with impossible expectations, but genuinely the code is unstable, and yes they have been working on it for a long time.
We started working on it at JuliaCon 2021, where it was at 22 seconds. See the issue that started the work:
https://github.com/SciML/DifferentialEquations.jl/issues/786. As you can see from the tweet, it's now at 0.1 seconds. That improvement happened within one year.
Also, if you take a look at a tutorial, say the tutorial video from 2018,
https://youtu.be/KPEqYtEd-zY, you'll see that the code is still exactly the same and unbroken over that half decade. So no, compile times have only been worked on for about a year, and code from half a decade ago still runs just fine.
I think you misunderstood me, all good. The diffeq/sciml landscape has been a WIP for half a decade, with lots of pieces of it changing rapidly and regularly. But so has the rest of the ecosystem. I think we both know how often this code has changed, but for some reason the Julia people are always like "oh we have packages for that" or "oh that's rock solid", and then you check the package and it's a flag plant that does nothing, or it's broken by a minor version change; then you try to use it, maybe even fix it, and it breaks Julia base... I'm not going to waste any more time digging into this to file an issue or prove a point.
I think passersby should be made aware of the state of things in the language without spin from people making a living selling it. No personal offence to you, just please consider not overselling; it's damaging to people who jump in expecting a good experience.
I linked to you a video tutorial from 2018, https://www.youtube.com/watch?v=KPEqYtEd-zY . Can you show me what code from that tutorial has broken in the SciML/DiffEq landscape? I know that A_mul_B! changed to mul! when Julia v1.0 came out in 2018, but is there anything else that changed? Let's be concrete here. That's still the tutorial we show at the front of the docs, and from what I can tell it all still works other than that piece that changed in Julia (not DifferentialEquations.jl).
> I'm not going to waste any more time digging into this to file an issue or prove a point.

> No personal offence to you, just please consider not overselling; it's damaging to people who jump in expecting a good experience.
I'm sorry, but non-concrete information isn't helpful to anyone. It's not helpful to the devs (what tutorial needs to be updated where?) and it's not helpful to passersby (something changed according to somebody; what does that even mean?). I would be happy to add a backwards-compatibility patch if there were a clearer clue.
> I think passerbys should be made aware of the state of things in the language without spin from people making a living selling it.
The DiffEq/SciML ecosystem is free and open source software. There is nobody making a living from selling it.
> I like Julia, but this is a point of frustration for me as well.
It's a point of frustration for most of the community, it's just something we're willing to put up with (and work around) for the other benefits of the language. If the tradeoff doesn't seem worth it to you, it's totally fine to wait it out until it's in a more acceptable place for you.
There's a lot of focus on improvements surrounding this in the recent and upcoming versions, precisely because of this frustration, but it's still going to be a gradual process.
> If I had to wish something from Julia, it would be to provide a way to turn off runtime optimization to (radically) speed up compile times for purposes of debugging.
100% agreed on that. I've tried a Julia alias with `--compile=min --optimize=0` options passed in to try to say "please give me responsiveness over runtime performance", but it's still not quite the smooth flow I'd like it to be.
> Dynamic Binding
Beyond performance, it sounds like dynamic binding would have the same hard-to-debug action-at-a-distance problems that global variables often land you in, so I'm not sure it's worth it. (The specific case the author mentions would also lead to type stability problems, but that's maybe beside the point.)
> Structural editing
It's hard to process things from gifs, especially since I can't tell what the starting point of the gif is. It vaguely gives me the impression of the Emmet plugin for HTML development [1].
> I am aware julia has a --lisp mode, but I have never found any documentation for it. So, I don’t agree that all the things in julia are well-documented either :).
Afaik, the `--lisp` mode is intended to be sort of an easter egg, rather than a real mode for practical coding. I doubt many people use it other than Jeff himself. :)
The author doesn't say everything in Julia is well-documented, by the way, or even mention Julia documentation. There are just complaints about the Lisp ecosystem's lack of documentation, and perhaps from that an implication that Julia has better docs, but I doubt the author would say all the things are well-documented - there's still quite a way to go for that to be the case.
Overall, the article left me more curious to explore Lisp, not less. It didn't feel particularly gloomy, and exposed me to many features of the language that make me really want to try it out. I hope there's more articles like this - in the sense of being intended for a general (non-lisp) audience, and talking about specific features rather than just "it expands your mind, it's programming like you've never done before" type statements.
Nice response. Seems to mostly agree with OP. About the "long-for-debugging compilation times" argument, it is usually raised when running a fresh Julia session to test updated code. The expected workflow is actually more of keeping a single Julia session opened indefinitely and including updates in it. This workflow is enhanced further with Revise, and conceptually is similar to how people usually develop on MATLAB and R/RStudio. Python programmers will only really use this approach inside a Jupyter notebook.
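For anyone unfamiliar with that workflow, here's a minimal sketch (assuming Revise.jl is installed; the file and function names are made up):

```julia
# Start one long-lived session and keep it open:
using Revise

includet("mysim.jl")  # tracked include: Revise watches the file for edits

run_simulation()      # first call pays the compilation cost
# ...edit mysim.jl in your editor, then just call it again:
run_simulation()      # Revise applies the edits; no session restart needed
```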
Yea, but keep in mind that anytime you get a bug your session goes away, so you can end up eating hours from a day precompiling. Not worth it with heavy pkgs imo. Wish they had incremental compilation like Rust, because most compiled languages are noticeably faster than Julia precompilation ime.
This only happens for Segmentation fault type crashes which kill the entire Julia process, which shouldn't be common at all. Can you describe when you experience these issues?
It's super easy to get OOMs and segfaults in Julia. For a while there this past year you couldn't even Ctrl-C to stop the Julia process. It's rough for real work imo. Fine for research.
I don't remember Ctrl-C being broken in the past year. Could it have just been a specific program you were running with a tight loop that didn't have any yield points? If so, that's not really unique to Julia.
I'd also be interested in your workload that was generating lots of segfaults (OOM makes some sense if working with large data, since Julia's runtime does add an unfortunate amount of memory overhead).
Check the version summaries. Think it was Julia 1.7.1 or something. There are lots of bugs like that that crop up every other release.
Segfaults happen all the time with FFI. But yea, OOM is a killer. The Julia runtime guzzles RAM; `pkg> add` any of the SciML stuff and watch your RAM explode. Doesn't take much data to lose a half hour of your life installing a package...
The RAM usage is because precompilation runs in parallel. If you have like 16 threads going then yes, it'll parallelize that across 16 workers and use correspondingly more memory. But we have never seen a half-hour package install; can you share the info to reproduce it?
On my decade-old laptop, `using Plots` (precompilation) takes a couple of minutes. Not half an hour, but it may feel like that if someone is used to Python (or R), where imports are instantaneous. Though I think GP meant it takes that long due to OOMs, which can leave the system frozen or slow.
A couple of minutes is expected. Even R and Python packages have to run the build processes for the associated C and Fortran codes, and those take similar time (or more for many packages). However, if a Julia package is precompiling for more than half an hour, that's not expected and I'd like to see a reproducer for this so we can fix it.
It's not a half-hour package install. It's a half hour of loading everything back in to get to where you were because the runtime dropped; sorry for the lack of clarity. But yea, it's real easy for pkg installs to OOM people.
Julia devs seem to work on improving precompilation times every version. That said, if a project or workflow revolves around some packages, a way to skip this step is to create a sysimage, essentially a saved Julia session, that includes those packages. PackageCompiler.jl significantly simplifies the process.
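Roughly, a sketch of that workflow (here `Plots` just stands in for whatever heavy packages a project actually uses):

```julia
using PackageCompiler

# Bake the heavy dependencies into a custom system image.
# This is slow, but you only pay for it once, not every session.
create_sysimage([:Plots]; sysimage_path="sys_plots.so")
```

Subsequent sessions started with `julia --sysimage sys_plots.so` then load those packages with essentially no precompilation lag.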
I wish Julia moved away from using abstract types and concrete types and moved away from "inheritance", and instead decided to tackle traits full on. Traits in Julia are SO powerful but currently are hacky or third party enabled with weird odd syntax. The built in abstract type system should really just be deprecated in favor of a generic trait system, because otherwise people will learn that way of programming which is (imho) a less expressive way to represent the same problem.
What are the chances of that happening for Julia 2.0? It feels unlikely. Julia is SO close to being an objectively perfect programming language. With more and more things being compiled ahead of time, it'll replace all my needs for Go. I still might use Rust though :)
I'm an avid Julia user and I 100% agree with that. Traits are far superior. They would not be too hard to add to Julia. However, I think the chances are slim of it happening in Julia.
My understanding is that this was being worked on by Jan Vitek and his team as a research project. Not sure how much communication his team has with the Julia compiler devs.
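For readers who haven't seen it, the "hacky" built-in way being alluded to is usually the Holy-traits idiom, where a trait is just an ordinary type you dispatch on. A minimal sketch with made-up names:

```julia
# Trait tokens: plain structs, no type hierarchy required.
struct Iterable end
struct NotIterable end

# The trait function maps a type to a trait token.
isiterable(::Type) = NotIterable()               # default
isiterable(::Type{<:AbstractArray}) = Iterable()

# The public function re-dispatches on the trait rather than a supertype:
describe(x::T) where {T} = describe(isiterable(T), x)
describe(::Iterable, x)    = "can iterate over $(typeof(x))"
describe(::NotIterable, x) = "opaque value of type $(typeof(x))"

describe([1, 2, 3])  # "can iterate over Vector{Int64}"
describe(:a)         # "opaque value of type Symbol"
```

It works, but the dispatch-on-a-token plumbing is exactly the "weird odd syntax" the parent comment complains about.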
> Getting help is also often a problem. Experienced Common Lispers assume a basic understanding of the language, good style practice, and familiarity with Emacs and the associated Common Lisp tooling installed. Asking for help as a beginner, and posting a small snippet of code, more often than not results in a wall of text of replies asking the user to fix their style before they can consider helping. This is quite dissuading as a newcomer, and detracts from the user's learning path. In extreme cases, which are not rare at all, the community can be quite inflammatory towards newcomer questions, as they often get very upset over incorrect terminology or improperly formatted code.
It’s a shame to see such posts as you were a cordial and enjoyable member of the IRC community for quite some time. For me, it’s one of the better IRC channels out there and I learnt a lot from the PROFESSORS of computer science who regularly helped newcomers in the channel #clschool. [lisp123]
Otherwise glad to see you have found something enjoyable and curious on how you have pivoted away from CLOS, maybe I should investigate Julia too
Stefan Karpinski (one of the core Julia contributors) gave a talk (in 2016, so rather dated as far as the Julia syntax and whatnot go) about how Julia is secretly more lisp-y than even lisp & Scheme. The core of this argument is that most language specifications dedicate many pages to defining how arithmetic works, whereas Julia is actually just able to define that in Julia code as part of the stdlib (the implementation just calls out to LLVM intrinsics, but nonetheless, super neat!). `Int64` is actually defined in a Julia file.
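You can check this from the REPL yourself (output abbreviated, and the exact location varies by Julia version):

```julia
julia> @which 1 + 1
+(x::T, y::T) where T<:Union{Int128, Int16, Int32, Int64, ...} in Base at int.jl

# The method it points to is ordinary Julia code that forwards to an
# LLVM intrinsic, roughly: (+)(x::T, y::T) where {T<:BitInteger} = add_int(x, y)
```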
Defining parts of the language as executable code instead of prose is attractive. One drawback is over specification - the reference implementation may have inessential features that a prose version could leave to the implementor - but overall I think that risk is minor compared to having executable test cases/examples embedded in the standard.
Arbitrary-size integers written in Lisp are a thing. Maybe Gambit and SBCL? Can't remember offhand. Calling down to llvm.sum.i64 or w/e is very similar to calling down to x64.
Julia's syntax is its greatest strength but also its biggest weakness. Large Julia code bases without team standards are complete soup. For small to medium size projects it's all good though. Just wish the community wasn't overall so crappy.
What do you mean by crappy? I've always found it very lovely and welcoming (though a touch small). The Julia community is also (in my experience) skewed towards scientific computing rather than software engineers, which can definitely have an impact on things like "codebase quality", even in big, important libraries. That's not a dig or insult – there are different priorities (privileging exploration, innovation, new ideas, and code that only needs to work until the paper you're writing is done, rather than long-term maintainability, is not a fundamentally wrong tradeoff to make).
I met a guy who got kicked away from the community because he had different political beliefs than a lot of the people there. Immediately after he got the boot he stopped contributing, and they took control of his two years of research (multiple repos). I get it, it's OSS, but at the end of the day it kinda looks like stealing someone's work. There's other instances of stuff like this too... Just hang around for a while and watch...
Not here to say it's all like that. But keep in mind if you aren't paying for the product, you probably are the product.
There's only one person who's been kicked out of the community (in the sense of being banned from the discourse forum), and that was not due to 'political beliefs', but repeated abuse and personal attacks.
Anyway, how can anyone 'take control' of someone's repositories? Was this person kicked out of github too?
Can't reply to you; the depth of discussion got too long, I think. Yea, they didn't get kicked out entirely, not banned or anything, but they became "persona non grata" over stuff in their personal life. At least that's how it was explained to me.
I dunno what you should do if someone leaves, but bullying someone until they leave, then forking all their work after they do, is kinda crappy. Again, I get that it's OSS, but a lot of people don't make OSS contributions and hope for that kind of outcome. It's worth putting out there imo.
If you click on the timestamp ("17 minutes ago"), then you will be able to reply below the post.
OK, I don't know that person's situation then, and cannot speak to it. But bullying is definitely against the community guidelines, and my experience is that there's not a high tolerance for rudeness, in fact I think the community is quite conflict shy.
That this happens with some frequency is a pretty big surprise to me, as I said, I follow the community closely.
Is it possible that you know only one side of the story?
From this description this is fairly obviously the LightGraphs situation, and that's a pretty misleading account of what happened. The person was not kicked out of anything—they were not blocked or banned from any platforms or forums. They chose, of their own volition, to stop participating in the community. I've never seen any evidence of bullying for political views or otherwise; maybe there was some, but if so it was never reported, and it would have been a clear and actionable community standards violation. Whatever their reasons, this person decided they wanted to leave, which is unfortunate—we don't want anyone to feel unwelcome—but it's their prerogative.
That would have been fine, if unfortunate, but they also wanted to "take their work with them" in the sense of archiving their registered open source package repos, preventing any further maintenance or development. This desire was not about not wanting the maintenance burden—they were not willing to grant ownership of the repos to other maintainers. In short, the original author wanted to force all development of the packages by anyone to stop. Of course, that would have left all the people who had come to depend on those packages high and dry, since the code they'd come to depend on would get no bug fixes, security patches, etc. Despite the fact that there were active contributors to that code who were happy to take over maintenance.
Imagine if Linus Torvalds got mad one day and decided to insist that no one could do any further development of the Linux kernel. No bug fixes, no security patches, no new features. Linus out. That was the situation here. Fortunately this is not how open source works: open source licenses are not revocable, and the ability to fork a project is baked into each license for this exact reason—so that a disgruntled author cannot screw over an entire community of people who have come to depend on their work. They don't have to keep doing work, but they also can't take away what they've done. If Linus threw a tantrum and refused to allow any more work on Linux, the rest of the community could take over and continue maintaining the kernel—fixing bugs, patching security flaws, even adding new features. Linus could close down his git repo and never touch the kernel again, but other maintainers could continue to develop Linux and support the vast community of users who have come to depend on it.
Similarly, it would have been perfectly legal to fork LightGraphs and continue development in a new repo with the same name. Out of respect for the original author's wishes, however, the LightGraphs package was allowed to be "frozen" with no further development. But it would have been deeply irresponsible to cease all maintenance and leave all the people who use and depend on LightGraphs hanging, especially given that there were willing maintainers. So LightGraphs was forked and renamed to "Graphs"; the old repo has been allowed to remain frozen, while maintenance and development has continued under the Graphs name in a new repo. The author of LightGraphs got their wish for work on the thing called "LightGraphs" to cease. The users of the package didn't get screwed over since they can do a simple search and replace and keep using a maintained graphs package. Personally I think the community handled it with responsibility and grace.
There's another issue. In addition to the open source license and what it promises, when you accept contributions from others it isn't just your work anymore. LightGraphs had 100 contributors, what about their efforts? Not to mention additional work that others have done on top of that in other libraries.
Who would contribute to a software library if they knew that the main dev could just mothball their efforts at any moment.
If you have donated a ball, you can no longer just pick it up and go home. If you don't want to donate work, don't do open source and invite others to join in.
I don't know anyone else who got kicked out (that takes a lot). But I know of a situation where someone walked out due to a non-political disagreement. Perhaps that's the one.
I follow the community pretty closely, posters being banned (except pure spam accounts) is something I think I'd notice.
Besides, what can you do if someone walks away from an important package? Should everyone, including collaborators on that package, just start from scratch? What?
> Writing performant code in Common Lisp is not for everyone, and it most certainly cannot be done portably; what might run fast on one implementation may run poorly on another. The moment you start writing implementation-specific code, in my honest opinion, you are better off using another programming language (which can also be considered writing implementation-specific code).
can you please elaborate on this conclusion - "better off using another language"? i find it interesting because i happen to fit into this category - ie writing performant code in cl (sbcl). besides sbcl no other implementation interests me (actually maybe CLASP) and i care very little if my code is portable; hence marked as cl (sbcl). sbcl allows me to do all sorts of optimizations that not even julia has. and cl with emacs+slime is to me by far the most enjoyable hacking experience
the fact that you are writing a graphics engine (which is hardware dependent if performance is what you want) suggests to me that you too should not spend so much time on portability. i think implementation specific libraries in cl should be welcomed. like with people, if you try to get everyone to like you you are not going to get very far
Most of his technical problems would be non-issues if he abandoned portability to different CL implementations: Just pick a single implementation (SBCL) and use it. Other languages don't have this problem because there's only a single widely-used implementation.
Yet, a lot of Common Lisp developers keep kneecapping themselves by imposing the portability constraint where it makes absolutely no sense whatsoever: Few will actually care in practice about the 3-4 other CL implementations that the code happens to "support" (usually, entirely untested) but the code will be bloated by the use of 3rd party half-assed underdocumented "portability" libraries.
I'm a newcomer to Common Lisp and I do agree to a certain extent with the "social problems" part of the OP critique. If I had to emphasize one, it'd be the lack of focus and polish (which includes documentation) in people's output. There are very few people in the opensource CL community that consistently produce good code and that's a major problem.
> Other languages don't have this problem because there's only a single widely-used implementation.
Not true for C/C++ (clang+clang/osx, gcc, msvc, icc - ed: see also stdlib, libc), Java (fewer now, but still many implementations of compiler and VM), JavaScript, Scheme, Standard ML...
Arguably it's not even true for Python. Ruby has many implementations, but my impression is that mruby and JRuby are mostly insignificant wrt the average Ruby programmer. I would say PyPy sees more real-world usage, maybe?
I have given up on portability a long time ago. I value the concision of Julia code, the "automatically fast by default due to the great JIT compiler and type inference engine", and the "everything is zero-runtime overhead generic".
Author here. I am a relative beginner to Julia, having only used it seriously for a few months. For those familiar with Common Lisp, I will try to answer any questions you may have in this thread, that weren't addressed in the article.
1. What's your own probability estimate that you'll be back to Lisp by, say, 2024?
2. Can you elaborate more on "they are very awkward to work with and only offer a subset of the abilities of the Emacs plugin" for vim? What did you find lacking about either slimv or vlime that you absolutely couldn't stand such that you forced yourself to use emacs? I'm most familiar with slimv and am aware of some quirks (and one bad still-open issue related to errors in threads which is annoying to deal with when it bites) and some limitations, but I'm fortunate to be mostly unbothered by them or in one case so far submitting a patch to fix one annoyance. At least, I'm not bothered to the level of abandoning vim -- I'll probably try the VS Code plugin before trying emacs again, or shell out for a LW license. Specifically it's things I see emacs users do like clicking a printed object's memory address to open the thing in the inspector, or having a slightly less ghetto code stepper. I'd rather have other things I miss from my Java life that as far as I know aren't even in emacs for CL.
3. Did you seek out any downer takes when evaluating Julia? What did you think of any of them? Some specific examples include https://yuri.is/not-julia/ or http://danluu.com/julialang/ where Yuri's post is linked at the bottom as a sort of update. If you read anything like that, have you been concerned? Any good "excuse" articles you found that address any downer points?
Anyway, thanks for your work in Lisp Land, both in code and IRC messages.
1) Julia will be my primary programming language for the foreseeable future. However, I am invested in a large CL project that has been on hiatus for a couple of years and will be starting back up soon. I will, however, only be working on that during our monthly meetings. I don't see new projects being developed in CL, though.
2) The Vim plugins only support SLIME, not Sly, which offers many more features I simply can't live without, such as stickers. Additionally, indentation in Vim cannot be made dynamic. That is, for every macro you write, you are forced to edit a flat file that describes the proper indentation rules. I have a couple thousand lines of my Vim configuration dedicated to working in CL, and it still isn't good enough, compared to the experience in Emacs. And yes, there are countless bugs that don't exist in Emacs/Sly.
3) Yes, I read that post and some others. Bugs like the AbstractArray usage are programmer errors, nothing to do with the language. Julia is 1-indexed by default, but with libraries such as OffsetArrays.jl and CircularArrays.jl an AbstractArray can be indexed at 0. I shrugged most of this post off, because I don't see them as correctness bugs in the language proper. As for the TTFX (time to first plot/execution) as noted in the other article, that is a problem that is actively being worked on currently by Tim Holy, and it's not really an issue if you build your own image anyway, only when you are compiling the code each time you start up your fresh image.
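For the curious, a quick sketch of what that escape hatch looks like (assuming OffsetArrays.jl):

```julia
using OffsetArrays

v = OffsetVector([10, 20, 30], 0:2)  # same data, but indexed 0 through 2
v[0]        # 10 -- zero-based indexing opted into via a package
axes(v, 1)  # 0:2 (an offset range), instead of the default 1:3
```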
Do you worry about correctness flaws in Julia that make it hard to write robust and correct code when you get deep into the weeds while developing libraries meant to be used by others or meant to interoperate with other libraries? It is not a problem with Julia but seems to be a problem in Julia's library ecosystem. It bothers me. Does it bother you?
Would you prefer to add correctness tests to the compositionality that Julia gives “for free” due to multiple dispatch, or would you rather rewrite huge libraries from scratch? E.g., if you want a “new” feature like autodiff on NumPy code, then you need to reimplement NumPy on top of every new autodiff implementation (PyTorch, JAX, etc.)
I’m also curious to understand the reasoning, for either choice.
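To make the question concrete, this is the kind of "free" composition being referred to (a sketch using ForwardDiff.jl):

```julia
using ForwardDiff

# Generic numeric code, written with no knowledge of autodiff:
meansq(x) = sum(abs2, x) / length(x)

# ForwardDiff's Dual numbers flow through it via multiple dispatch,
# so nothing in meansq needed to be rewritten against an AD framework:
ForwardDiff.gradient(meansq, [1.0, 2.0, 3.0])  # ≈ [0.667, 1.333, 2.0]
```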
> Would you prefer to add correctness tests to the compositionality that Julia gives “for free” due to multiple dispatch, or would you rather rewrite huge libraries from scratch?
This is a hypothetical future choice, and in that case I would absolutely choose the former (and I hope we get there).
But the current choice is more like "composability that often isn't tested as generically as it should be, with no interfaces at the language level, even the abstract types used for the composition often not having clear specifications, all of it holding together only because people fix things organically as problems arise". Then the choice isn't as clear-cut - choosing to go with the more limiting option where things are known to work well together is a respectable, sane choice.
If we're going to claim composability as a strength of the language, it shouldn't come with hidden traps, caveats like "composability but really only if the hidden assumptions on both sides happen to be satisfied".
Language-level interfaces would be great, but they could also easily have the same problem abstract supertypes currently have, of being poorly defined semantically, and hence being less useful than they could be. The main change needs to happen in the community, in terms of:
* more formal specifications for Abstract types your package defines
--- including in the language itself; I understand there have recently been efforts to define what an `AbstractString` is, exactly, and there needs to be a lot more of that
* tests that test the limits of this specification by defining new types that are very different from existing ones but conform to the specification (rather than the current usual practice of testing only with already-existing types) - see the sketch below
* More integration of things like OffsetArrays, DimensionalData, AxisArrays, etc. in test suites
* An explicit "Scope and Limitations" page in package docs (like Revise.jl's Limitations page [1]) that mention what has been tested and what hasn't/isn't meant to be supported
I'm sure there are many other low-hanging fruits the community could address by itself - it's pretty late here and these are what my brain could come up with right now. But I see this largely as a community best-practices and culture problem, one that language features can help with to some extent, but that needs to be solved at the social level.
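As a sketch of what that kind of specification-limit test could look like (hypothetical type name, real AbstractArray interface):

```julia
using Test

# A deliberately minimal AbstractVector: it implements only the documented
# interface (size and getindex) and nothing else. Generic code that works
# on this type is relying on the specification, not on incidental
# features of Vector.
struct BareVector{T} <: AbstractVector{T}
    data::Vector{T}
end
Base.size(v::BareVector) = size(v.data)
Base.getindex(v::BareVector, i::Int) = v.data[i]

@testset "generic code against a minimal type" begin
    v = BareVector([1, 2, 3])
    @test sum(v) == 6
    @test maximum(v) == 3
end
```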
Is the Common Lisp community really still so dysfunctional? I notice your link there dates back to 2006. Surely communities can improve over 15 years..
I would say it has improved, but it very much hurts me to watch some conversations. Lots of passive aggressiveness and stubborn people stuck with these old habits.
im only an observer on reddit (lisp and common lisp) and the community seems to be quite good and supportive. there are a few members who, while very helpful, can be somewhat terse, but thats only a minority. the latter might be a bit of a turn-off to some people but ive personally grown above being easily offended. overall i would say its an enjoyable community with a lot of new prominent members doing some exciting things
The sense that the user community is toxic arises from two specific individuals, one in the early days of lisp, at MIT, who was so toxic that they created a whole email list (as it was in those days) whose name was explicitly “—-without-[ahole]—-“, and one more recently who was so toxic that he single-handedly drove everyone off comp.lang.lisp. The StackOverflow lisp community, such as it is, is much less toxic.
Yes, well, it was a real problem, but it was really just caused by a tiny number of people, almost entirely two. One really bad apple ... and all that — two in this case, but separated by many years, so the feeling had barely calmed down before another one arose. It could have happened in any community. For some reason it happened in Lisp. Shrug. Also, you need to take into account that Lisp has been in active use for over half a century!! A couple aholes in 60 years isn’t that bad a record.
I used Clojure years ago and I did not like it. I don't have much more to say other than it tries to force me into a particular programming paradigm and isn't very expressive.
Of course you lose some significant functionality, but so you do moving elsewhere.
> [There] are multiple implementations of this ANSI standard, each with their own set of features.

You can just target a single implementation. Though, similar to the previous point, this kinda kills the point of having a standard and multiple implementations.
Some code examples in the CLOS section, and how it compares to Julia's approach, would be nice and would make the points easier to understand. The rest seem to be cultural rather than technical issues, which are also important when considering a language.
This really depends. If someone is trying to operate with small user-defined generic functions on a large array of user-defined classes in Common Lisp in a tight loop (i.e. what Michael was doing), then performance is very, very hard to recover.
Of course, performance claims about Common Lisp are very slippery because one can always say "well, __ implementation using ___ library can avoid ____ specific performance problem", but I think it's fair to say that in general, generic code and CLOS features can land you in very serious performance trouble quite quickly, especially if you're trying to make reasonably portable code.
in general if you are concerned with performance you will want the ability to "touch metal", which means digging into implementation-specific compiler details. this is something that sbcl is very good at providing
portable code is appropriate when the whole language can benefit from your library
Technically, both the parser and the lowering stage were written in Lisp (femtolisp); the parser has since been ported to Julia, but lowering is more complicated and has not yet been rewritten.
My C code from last century fails to compile due to changes in "more secure function definitions".
C++ changes "the standard" every couple years. When someone tells me they "know" C++ I have to ask which "standard" they "know".
Common Lisp code I wrote in the last century runs unchanged today.
So the complaint about Common Lisp being stuck in the past is actually a feature, not a bug.
When Julia settles on a standard (and only one standard) then it will be worth the learning investment. When that happens I can write code that will run without change in 2070.
So learning a programming language would be worth the investment only if that language has one unchanging standard? That feels to me like a bit of a hot take. Almost everything else changes in the world, often by necessity, so it's unclear to me that it's worth ossifying implementations so that C code from 1990 easily compiles in 2022.
A large portion of my (non-lisp) time is taken up because someone changed a language. What used to compile no longer does. Take, for example, Python. A large codebase in 2.7 needs to be ported to Python3 because a customer has a Python3 library. The original problem now needs to be re-coded by people who did not write the original code, achieving nothing of value. Of course, some of the Python 2.7 libraries need to be re-written without access to the source.
The goal for a 'standard' is not ossifying a language. Don't pretend that Python3 is in any way related to Python. Call it Snake or something else. Then 'python' code continues to work and 'snake' continues to work.
The confusion, especially evident in C++, is that 'C++' is at least half-a-dozen languages related by chance. When asked if you know C++, you really should reply with the 'standard' that you claim to know. There are 6 C++ "standards" that I'm aware of. I understand there is a 7th due to arrive in 2023. I wonder if 'concepts' will be in the new 'standard'. If so, do you know what it means to inherit 'up the add chain' versus regular inheritance? If not, can you still claim to 'know C++'?
It takes a LONG time to get "command of a language". Reading other people's code is a great way to see how much you don't know. For example, did you know that in C++ you can dynamically create a class and an instance IN THE ARGUMENT LIST OF A FUNCTION? It can take years to really "know" a language, much longer than the change time of languages.
I've not been reading good reports about Julia startup times and base memory use for simple programs.
In Common Lisp you can make a program that starts in milliseconds, from scratch; no Lisp stuff in RAM, other than your OS's buffer/file caches being warm. The same was true twenty years ago or more already.
Yeah, Julia doesn't work well for short-lived programs. And it's fine. It wasn't meant to be the language that does everything. (Though you can force it, e.g. utilizing a sysimage.)
I use both CL and more recently (obviously!) Julia exclusively and extensively. Julia is almost an acceptable lisp. Except that it’s not homoiconic, and they f’ing mark the macros (and a couple other minor nits). So Julia can’t really be used for the primary use case of lisp, which is DSLs. So now I do my DSL-like work in lisp, and scientific computing in Julia (or lisp with a sucky FFI to Python that I rolled myself — hmmm, I guess I should rewrite that to go to Julia — hmmm). Anyway, yeah, Julia is an almost acceptable lisp. Like, close, but missed the mark (yet again, but as close as I’ve seen). Maybe if I fix the FFI as above, I won’t have to actually see the ugly Julia non-homoiconic mess, and the hemispheres will no longer be fundamentally at odds.
> There is no built-in package manager for Common Lisp.
This was appalling to me coming to CL from languages with a lot of control over dependencies. It's essential that I can pin versions and ensure that I am getting the correct source code.
However lately I have been playing around with Guix System and now I'm wondering if the Guix package manager is the "missing" killer package manager for CL libraries (or for most any language).
The post doesn't mention other package managers: Ultralisp, a Quicklisp distribution that ships every 5 minutes (but doesn't check that all packages load correctly together); Qlot, used for project-local dependencies, where you pin each one precisely; and CLPM, a new package manager that fixes some (all?) Quicklisp limitations (limitations you may not encounter after a few years of happy use: Quicklisp is very slick).
Hello, vindarel. Thanks for your reply! I've actually been working through your Common Lisp course on Udemy. Thanks very much for that too! Sadly it has been slow because I have a lack of free time. I have been enjoying a bit of a programming Renaissance within the last year or two after writing in imperative languages for decades. I switched from Vim to Emacs (with evil), got into Emacs Lisp, then on down the rabbit hole with CL, Scheme, etc.. Good times.
Anyway, clearly I have more to explore with the CL ecosystem, but the point of my comment was really that I am surprised that a language as old (or venerable) as Common Lisp doesn't seem to have already sorted out strong package versioning and cryptographically verifiable dependencies. The fact that using HTTPS is sort of new is concerning. Maybe the language just comes from more trustworthy days in general, but my paranoia has problems with that these days. I see CLPM has a "beta" warning, mostly one author, and 14 stars (for what that's worth). It seems there is still a lot of work to be done in this area.
Despite all that, I would recommend to anyone to try CL for all its other advantages. Other tools (such as my suggestion with Guix) might be leveraged to make up for any shortcomings until good "native" solutions are sorted out (assuming I'm not completely misunderstanding the state of affairs).
So, the article is harsh on CL: YMMV. Also, your goal may vary: I want to build and ship (web) applications, and so far Julia doesn't look attractive to me (at all). Super fast incremental development, build a standalone binary and deploy on my VPS or ship an Electron window? done. Problem(s) solved, let's focus on my app please.
The author doesn't mention a few helpful things:
- editor support: https://lispcookbook.github.io/cl-cookbook/editor-support.ht... Emacs is first class; Portacle is an easy-to-install Emacs (3 clicks); Vim and Atom support is (was?) very good; Sublime Text seems good (it has an interactive debugger with stack frame inspection); VSCode sees good work underway (the Alive extension is new and usable, but still hard to install); LispWorks is proprietary and more like Smalltalk, with many graphical windows to inspect your running application; Geany has simple, experimental support; Eclipse has basic support; Lem is a general-purpose, Emacs-like editor written in CL, but poorly documented :( We also have Jupyter notebooks and simpler terminal-based interactive REPLs: cl-repl is like ipython.
So, one could easily complain about the lack of editor support five years ago; now your complaint should be more evolved than an Emacs/Vim dichotomy.
- package managers: Quicklisp is great, very slick, and the ecosystem is very stable. When/if you encounter its limitations, you can use: Ultralisp, a Quicklisp distribution that ships every 5 minutes (but doesn't check that all packages load correctly together); Qlot, for project-local dependencies, where you pin each one precisely; or CLPM, a new package manager that fixes some (all?) Quicklisp limitations.
> [unicode, threading, GC…] All of these features are left to be implemented by third-party libraries
this leads one to think that no implementation implements unicode or threading support O_o
> The fact that Common Lisp is a standard is both a blessing and a curse. Many developers consider this to be the former, as your code is much less likely to break over time. For others, it means that the language is frozen in time.
This is not a consequence of it being a standard; rather, no one has bothered to create a new revision of the standard, as has been happening with Ada, C, C++, JavaScript, ... for the last decades.
Same thing happened with Standard ML, if I’m not mistaken, with regards to the standard not evolving. It’s a lovely little language that could have been easily pruned with some more modern sensibilities.
almost all bits that make python famous live outside the standard library. moreover nothing in common lisp prevents one from extending the language. in fact quite the opposite: common lisp gives you enormous flexibility to do just this. HOWEVER, what is required in practice is that your library should be popular enough for it to become a de facto standard. things like this already exist [1], but the common lisp community is quite small so major libraries are nowhere near as common as in other languages. still though, given its size, some excellent projects are under way
However that is orthogonal to the language standard.
As for the standard library, and the usual bashing Python batteries get: the great thing about being in the standard library, regardless of how obtuse it might be, is the guarantee that it exists everywhere there is a compliant implementation.
im not bashing python. im simply saying that there is nothing stopping extension of common lisp through libraries (see coalton as a recent example of this in cl) as happens in python (eg numpy). that cl is ansi standardized is only a plus because it guarantees to me that massive portions of the language (the whole standard) will always remain the same
Unfortunately, Julia has a number of correctness flaws [0]. Just based on this alone, I can't use Julia simply because I can never be sure whether my code is wrong or the compiler itself is wrong. In scientific computing and machine learning, these problems are very important, unlike in other types of programs where it's more tolerable, because they deal with vectors and tensors with potentially billions of parameters and computation/training time might take several days. If I get an incorrect result, my time has just been wasted, not to mention money via compute resources.
Can you point to a compiler without correctness flaws[1]? If you use Fortran, or C++ your compiler will also have correctness flaws. Is the claim that bugs are more prevalent or fixed more slowly than with other compilers? Yuri kind of implied that in the article you link, but we would have to take his word for it.
The claim, as I understood it, is not that other compilers provably have no bugs, but that maintainers (and the whole community) react differently when one is found.
If you demonstrate a bug in gcc or rustc (or one of their base libraries), it is a major issue that will at least get documentation and a warning (and likely a quick fix). In Julia, even documenting it may be relegated to the middle of the todo list.
I think that's a reasonable take for a usecase like ML. In most code, >99% of your bugs will be the one you write yourself, and debugging the few bugs from Julia itself is really not much harder than debugging your own, in my experience.
Of course it's annoying that Julia itself is this buggy, but the problem is sometimes framed as if it causes some fundamental uncertainty and doubt about your program because the compiler can't possibly be debugged, whereas in my experience, when you see a Julia bug, it's just another bug, usually in some bog-standard stdlib function which you can easily inspect using code introspection.
And again, I've maybe seen 20 Julia bugs in 4 years of coding it daily, compared to probably 20,000 of my own bugs.
Besides your points, I've found Julia to have deceptive marketing about speed. I've found it to be super slow, mainly when writing expressive code. So then it boiled down to writing ugly code, which turned out to be a little less slow.
I don't buy "it's compiling at run time" argument, since other (interpreted) languages do not have this problem.
Also there were a lot of inconsistencies, mostly when trying to use (or abuse, who knows?) broadcasting.
Did it improve somehow since last year? I hope so.
It is quite likely that you have fallen afoul of some of the standard performance 'gotchas', like non-const globals or inadvertently creating type instabilities. Have you consulted the Performance tips?: https://docs.julialang.org/en/v1/manual/performance-tips/
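The classic instance of the first gotcha, sketched out: an untyped global defeats type inference, and simply passing the value as an argument (or declaring the global `const`) fixes it:

```julia
n = 100_000  # non-const global: its type could change at any time

function sum_slow()      # every access to `n` is dynamically typed
    s = 0
    for i in 1:n
        s += i
    end
    return s
end

function sum_fast(n)     # same loop, but `n` is now an inferable argument
    s = 0
    for i in 1:n
        s += i
    end
    return s
end
```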
Efficient Julia code should normally look simple and elegant, not 'ugly' (unless you are going into deep optimizations, like manual SIMD, or sometimes heavy reliance on in-place operations).
I think calling it 'deceptive' is problematic. Of course you can write slow code in Julia, like in any language, but did you write code that you think should be fast, but wasn't?
> I don't buy "it's compiling at run time" argument, since other (interpreted) languages do not have this problem.
I'm a bit confused by that statement. Compiled languages have to compile, interpreted don't. What did you mean here?
Almost all of those 'correctness flaws' are bugs in packages that weren't interfacing with each other correctly. Everything has bugs and correctness problems, but it may be that Julia users run into them more often because they compose packages in novel ways very often.
But we also have very rigorous testing infrastructure and a community that cares deeply about these things and moves very fast to fix them. You'll notice that every legitimate issue in that post is now closed and fixed, and the remaining ones are about the users misusing in place functions in the presence of aliasing.
I think it's fair to point out that Julia makes it really hard to write robust, correct code.
There are a lot of different interacting reasons for this (it deserves a blogpost on its own), but here are a few:
* There are no interfaces
* Existing abstract types are mostly undocumented: It's unknowable and certainly untestable what constitutes an IO or a Number or even an AbstractArray (yes, even AbstractArray leaves important edge cases unspecified).
I've written 100 methods that take `::IO`, yet I don't actually know what they can accept. Many of the issues with unsupported package interactions come down to this: one package doesn't know what it promises, and the other one doesn't know whether the promise is upheld. E.g. it's still unclear to me whether `OffsetArrays` actually fulfill the contract of an AbstractArray (see the sketch below). If not, it's a bug that it's an AbstractArray. If so, Base is insufficiently tested, as it ought to test its AbstractArray code with an AbstractArray with offset axes.
* Base Julia has several functions that are simply not tested; CodeCov of Base is far from 100%
* Iterators are assumed by Base and Iterators to be immutable - a buggy assumption in many contexts
* It's not even clear what is public and private in a package. E.g. are the fields of an exported struct private? Where is that documented? And it's way too easy to rely on unexported symbols.
* Speaking of which, you can export stuff that does not exist.
* Projects do not have compat entries by default
* Generic functions are rarely tested generically - i.e. not with any minimal abstract type.
* Promotion rules of non-numbers are unclear and underspecified, and accidentally changed recently on master because they are not documented or tested anywhere
* There is a lot of "Yeah, X isn't really semantically correct, but I can exploit its weird behaviour in my own code, so we shouldn't fix it, it's actually a feature" hacker attitude among Julians.
There are like, 100 more small things that make Julia more bug-prone. I think this is a serious issue about the language that we should take note of and try to work on. You'll notice several of these issues can be resolved. But we need to take it seriously.
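To make the AbstractArray point above concrete: generic code that bakes in one-based indexing doesn't error on offset axes, it silently grabs the wrong element. A sketch (assuming OffsetArrays.jl; function names are made up):

```julia
using OffsetArrays

first_bad(a::AbstractArray)  = a[1]                    # assumes indices start at 1
first_good(a::AbstractArray) = a[first(eachindex(a))]  # uses the documented API

v = OffsetVector([10, 20, 30], 0:2)  # valid indices are 0, 1, 2
first_bad(v)   # 20 -- no error, just silently the wrong element
first_good(v)  # 10
```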
I'm starting to think that multiple independent implementations of a language are important for catching implementation quirks before they become entrenched.
> Almost all of those 'correctness flaws' are bugs in packages that weren't interfacing with each-other correctly. Everything has bugs and correctness problems, but it may be that Julia users run into them more often because they compose packages in novel ways very often.
I agree with this. Yuri's post requires a bit of Julia experience to understand the context of it, that much of it is about drop-in combinations of packages and types that one wouldn't even attempt in many other languages. Julia allows you to do that and makes it programmatically easy, which has significant benefits. But it also requires work on the implementers' side to get right.
> But we also have very rigorous testing infrastructure and a community that cares deeply about these things and moves very fast to fix them.
Not sure how much I agree with this. If you're talking about the language itself, sure. But a lot of the libraries don't have extensive testing infrastructures, especially not the kind Julia obviously needs based on the above, with beyond-the-expected-usecase tests. Maintenance of packages varies a lot, with the median being on the slower side. There's definitely "a community that cares deeply about these things"; we just need more of it.
In Julia's defense, its ecosystem is nascent, and it's not designed with a focus on guarding against undefined behavior. So far the community has been really good at cataloging & fixing interop issues between packages. As time goes on I'm sure it will get better.
There's nothing in Common Lisp that would've prevented similar errors if its package library had been as big as Julia's. CLOS has even fewer guardrails than Julia's object system.
The issue regarding editors is a real one IMHO.
Common Lisp is too tied to Emacs. It can put off people who don't like Emacs. And while there are alternatives, like the author said, none of them provide as complete a CL dev env as Emacs.
But at least there is work done to perhaps remedy this issue.
Emacs is the main reason I gave up on CL.
I would think that none of the available options provide a complete integrated development environment as LispWorks and Allegro CL (which are commercial, but still they exist and are available) do.
The combination of SLIME/GNU Emacs and SBCL has some unique features (and is really a very useful combination), but one won't get the whole integrated-development feeling from it. For example, the LispWorks IDE is completely written in itself, is fully GUI-based, and is all one application. Thus the cross-platform GUI system used to write the IDE is also available to the application developer. GNU Emacs OTOH is an external application, not written in Common Lisp and with a clunky user interface.
Then there is a path of older and mostly abandoned, but powerful and/or very usable user interfaces: examples are Xerox/Medley Interlisp (https://interlisp.org), Symbolics Genera, Macintosh Common Lisp and others.
Some questions about CL I have relating to my interests: Does Common Lisp support operator overloading? What's the situation with support for matrices and linear algebra? Has autodiff been implemented?
[edit] What about Computer Algebra systems? Can I just interface with Sagemath, Maxima or Sympy somehow?
1. CL supports multiple dispatch, but its built-in numerical operators are not extensible. There are two libraries, GENERIC-CL and CL-GENERIC-ARITHMETIC, that put wrappers over existing operators... but you have to use those wrappers.
2. Matrices and linear algebra have only rudimentary support in Lisp. It's better than nothing, but it's worse than NumPy. MAGICL, Numericals, LISP-STAT, and NUMCL are four such libraries.
3. To my knowledge, minimal/toy implementations of autodiff exist in Common Lisp, but nothing big or serious. Scheme was actually a real hotbed for autodiff research, by Pearlmutter, Siskind, et al.
4. Two of the historically best computer algebra systems are written in Common Lisp: Maxima (derived from MACSYMA of the old days) and FriCAS (derived from AXIOM of the old days). These systems basically build a computer algebra programming language and engine on top of Lisp.
> It really is not that far off from Common Lisp if you squint really hard
This made me chuckle.
But this post did help me understand how Julia is Lisp done right in some specific ways, and also understand the appeal of Lisp as a programming language.
What would be awesome would be if Julia had an allowed alternate syntax that happened to look exactly like Lisp. Then the world really would be a better place. (Yeah yeah, see my comments elsewhere in this thread!)
Although I’m sure you’re right about that, to us lispers, your Algol-based languages are comma, bracket, braces, semicolon, sometimes-infix-sometimes-not, indentation, newline, end-statement spaghetti. :-)
I propose the word “gitiot” for an idiot who posts a “there oughta be…” without googling it first. (I can’t figure out how I missed this. I’ve been in a “lisp on the one hand and Julia on the other” bubble for years. It simply never occurred to me to look for this, and it never crossed my desktop. Gitiot!!)
To be fair to yourself, LispSyntax.jl is more of a proof of concept than something that anyone would want to actually do significant coding in as can be seen by the TODO list in the README[1]. It also hasn't seen any active development in several years.
Just tried it; sorta works. The most basic examples run, but the REPL is broken; probably designed for an older version of Julia. I'll see what I can do with it. Anyway, thanks!
I had been intrigued by Lisp for years, but it was Julia's roots in Lisp that finally got me to explore the Lisp languages. I actually haven't used LispSyntax.jl, but it's close to what I'd like to have in a language.
Boiling down some of his points on the problems with Common Lisp that made him switch away from it:
1. Editor support. The original poster bemoans that the only fully supported editor is emacs and that there is not sufficient support for neovim. I have used emacs and slime, but I had to stop because of the ulnar tunnel syndrome it was causing. But I knew vim before I learned emacs, and I am a neovimmer myself. I have used jpalardy's plugin[1] for many years and couldn't be happier with it. There's a visible repl built in, so by definition it's fully featured. The reason emacs is the only editor for lispers is that it tries to be the whole operating system. It turns out that using a terminal multiplexer along with an editor gives you everything you need. I like GNU screen on Linux and ConEmu on Windows.
2. Community. The original poster speaks of toxicity and a community full of people who are not welcoming to newcomers who ask simple questions. I'm just barely starting my Common Lisp journey, so I can't speak to that. But I can't say I'm discouraged by a community that values members who are capable of doing their own research and not needing their hands held. I'm often surprised by developers who need YouTube videos to explain to them how to do their job, as if they can't read their own code. Maybe that's why the documentation is so poor in the Common Lisp community: the expectation is that you can read the code and figure out what it does. The original poster says that the community is full of people who do not wish to work with others. That makes me feel like if I write in this language, I'll be able to be productive even if no one else wants to work with me. I'm not hugely popular and don't have a ton of stars on my GitHub pages, so the idea of a language that lets me be productive even if no one else wants to help sounds pretty good to me.
3. Packaging. The original poster speaks against packagers who take responsibility for the cleanliness of code before it gets packaged up for the quicklisp dist. He complains that packages are released on a monthly basis and that this is not fast enough for him. As a DevOps engineer, I think this is fantastic. I hate it when developers release code too frequently, because new releases often break things even when they are not intended to, and reacting to those changes takes time. A little bit of time before each release is easier on your consumers. I feel like the sweet spot is 90 days. I'm getting killed at work right now because of the breakneck speed of helm chart packages and how fast they release completely breaking changes. I also consider the absolute best packaging communities on this planet to be the Fedora and Debian operating system packages. The idea of having a packager separate from the actual software author, making sure that the software lands well and plays nice with its neighbors, is a huge feature of those communities. Those operating systems wouldn't be possible without them. The original poster also complains that there is no versioning. If this is the case, it is very sad to me, and I hope that somebody can refute this claim.
1) There is much more to the interactive development experience than the REPL. As mentioned in the article, the debugger and inspector are key parts of this workflow.
3) It's not that software is released too far apart, it's that the release of software is out of the hands of developers, and packaged by a third party with dependencies that may not even be API compatible with the developer's software. Since there is no versioning dependency management for Quicklisp to leverage, all it can do is try to build your software in isolation, not check compatibility with dependencies, and certainly not runtime compatibility.
> 3) It's not that software is released too far apart, it's that the release of software is out of the hands of developers, and packaged by a third party with dependencies that may not even be API compatible with the developer's software. Since there is no versioning dependency management for Quicklisp to leverage, all it can do is try to build your software in isolation, not check compatibility with dependencies, and certainly not runtime compatibility.
This is what I like about quicklisp. IMO other package managers don't do it this way only because it doesn't scale; pushing packaging and releasing to individual developers does.
Also, building the software "in isolation" checks at least compile-time compatibility with dependencies. IIRC someone (not Xach I think?) also runs the system tests for each system in quicklisp, which will check runtime compatibility to the extent that the tests do so.
In any case, I have literally never run into an issue with transitive dependencies not working in QL; not sure if I'm lucky or what.
> As mentioned in the article, the debugger and inspector are key parts of this workflow.
Which you can use from the command line inside of the multiplexer. Like I said, it's fully featured because it's just you firing up Steel Bank. The debugger and inspector are still there.
> Since there is no versioning dependency management for Quicklisp to leverage, all it can do is try to build your software in isolation, not check compatibility with dependencies, and certainly not runtime compatibility.
The lack of versioning and dependency management is pretty dumb, I'm not going to lie. I didn't expect that from a package manager.
> It's not that software is released too far apart, it's that the release of software is out of the hands of developers, and packaged by a third party with dependencies that may not even be API compatible with the developer's software.
Perhaps there are some cases where this is a problem, but in general, I firmly believe that having separate packagers is a good thing[1]. It can be annoying at times, but much more comfortable for the end user.
The problem with this point of view is that for packages under active development, it creates an awful user experience. Running into an issue that was fixed two years ago and that you can't fix is incredibly annoying.
If we're talking about server operating systems, yes, I commiserate with you. But the same principle is applied with Arch Linux, and I've never felt more joy interacting with a packaging system (or Linux community for that matter). Everything is packaged and up-to-date (again, sometimes too up to date). I use Fedora at home and it's at most months behind, but certainly not years, and usually strikes that sweet spot between stability and annoyance.
As someone using an Arch variant, I can say that the newness of the packages isn't really a problem. The real problem is that there are no package compatibility bounds, so even if a new version of a package is known to be incompatible with something you have installed, it'll happily upgrade that package anyway. Plus, there's no easy way to request an old version of a package.
I just wish I could use Julia's package manager as my Linux distro.
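It would at least fix the compat-bounds problem from above: every Julia project declares bounds in its Project.toml, and the resolver actually enforces them, including for transitive dependencies. A rough sketch (the package name and UUID are the registry's Example.jl, shown purely for illustration):

```toml
[deps]
Example = "7876af07-990d-54b4-ab0e-23690620f79a"

[compat]
# Semver-style bound: "0.5" means >= 0.5.0 and < 0.6.0, so a
# known-incompatible 0.6 release can never be auto-installed.
Example = "0.5"
julia = "1.6"
```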
A common problem is a misplaced parenthesis, which you can spot easily if you use an editor that auto-indents the code. Encouraging newcomers to format code, and to learn how to have it done for them, is good advice; it's the first survival skill.
I think the Lisp Discord server is also better at this sort of thing. I've seen a few requests (not walls of text) or tips for better formatting in the #beginner-questions room, but they're pretty polite, and at least one person whose formatting was really messed up initially improved and hasn't been chased off. Despite the qualms I have with Discord itself, I also think it's just naturally more jovial than IRC, in large part thanks to the interface: you can paste code blocks with basic syntax highlighting (no need to farm out to pastebin, or risk people yelling at you for flooding an IRC room), direct replies make parallel conversations easier to track, the recent threading feature gets a bit of use, separate but easy-to-reach channels keep topics organized, and there's easily searchable history...
https://github.com/jpalardy/vim-slime is a terrible SLIME to be honest! Wait! It is not even a SLIME. It just copies text from one text buffer and pastes it to another Vim buffer which is probably running a REPL. "Probably" because who knows what the target buffer is running. vim-slime does not care. This is not Superior Lisp Interaction Mode for $EDITOR (SLIME) in any way.
vim-slime does not connect to any Swank server. It does not understand Lisp s-expressions. It will happily copy any random text into any random REPL and call it a job done! Lisp interaction mode is much, much more than just copying and pasting text around. A superior Lisp interaction mode gives you live debugging, condition handling, inspecting variables, navigating stack frames... vim-slime cannot do anything like this because, well, it is not SLIME! It just copy-pastes stuff around. vim-slime is a disingenuous and misleading name for a project that is not SLIME.
If you really want to use Vim, do yourself a favor and use https://github.com/kovisoft/slimv and experience a true Lisp interaction mode. It contains an actual Swank server and an actual Swank client that connects to the Swank server to provide an actual Lisp interaction mode in Vim just like SLIME does in Emacs!
All that stuff still exists, it just exists in the multiplexer instead of the editor. That's my point. I don't need it in the editor. I just need it.
Even better! It's not just useful for Lisp! It works for Python, bash, whatever. That's not a bug, that's a feature. It's simple: let the editor be the editor and the multiplexer be the multiplexer. It's much more "vim zen" than importing an entire REPL process into the editor.
I've always known about slimv, I just like vim-slime better. My only complaint is that when it copies and pastes text around, it's slower using interprocess communication than it would be using TCP. But the nice thing is I can see exactly what's happening instead of having it all hidden from me. That makes debugging problems much easier.
Even describing slimv as a "mode" belies its emacs-ness. Vim doesn't have major and minor modes. It has insert mode and normal mode, that's it.
I don't even use fugitive. Why do it when the git CLI is faster? I've already learned the cli, why do I have to learn an entirely new set of commands to do the same things I already know how to do? Especially when I'm using vim inside GNU screen? C-a 2, boom, I'm in a CLI. Switching back and forth is a breeze. I have a few key bindings in vim that allow me to see the git blame line of a particular line, that's it.
Don't turn vim into a user interface for $x. Embed vim into other user interfaces. Vim plugin for intellij, vscode, vim inside a multiplexer, ... This works much better in my opinion.
Thanks for the reply. I do have some questions to understand how far vim-slime can go in providing a good lisp interaction environment. For instance,
1. Can I select an s-expression (along with all the nested s-expressions within it) and send it to the REPL? When I checked vim-slime, I saw that it has no understanding of s-expressions. The onus of carefully selecting the s-expression and sending it to the REPL fell on the user. If I selected what amounts to nonsense and sent it to the REPL, vim-slime would send that nonsense for evaluation. vim-slime did not seem to help with these things. Is that your experience too?
2. Does vim-slime use the compiled functions for autocompletion/suggestions? SLIME (and I think SLIMV too) helps with autocompletion, shows the valid arguments of a function automatically in the status line, and does similar things as I type code. SLIME/SLIMV make use of the compiled functions to provide IDE-like completion/suggestion features. Can vim-slime do this?
SLIMV and the real SLIME have key bindings to send the s-expression the cursor is on, or the enclosing top-level form, to the REPL automatically. Seeing the valid arguments in the status line as I type is useful for functions where it is easy to forget which argument goes first and which goes second. These are just some of the many small features I rely on while using the real SLIME with Emacs. Can vim-slime do these too? When I checked vim-slime, it could not do these things, and it could not do a bunch of other things that I consider essential for Lisp interaction.
Sorry for all the edits, just want to write all my thoughts down in one place.
1. Various interactions I frequently have:
- `va(C-cC-c`. Select the current parentheses text object (s-expression) and send it to the repl.
- `C-aC-a`, switch to repl to view output, interact with debugger, etc.
- `ggVGC-cC-c`, select the entire file and send it to the repl.
- I've never done this one but since you mentioned it as useful to you: `?^(<CR>va(C-cC-c`, evaluate a top level form. Not perfect, but probably good enough especially if a line formatter is used to enforce sane indenting like this one for Clojure[6]. If a line formatter does not exist, the usual `ggVG=` built-in vim indenting works just fine.
- `:set makeprg=<command to run unit tests with line numbers>` followed by `:make`. Batch error reporting and fixing, very useful[1]. In Clojure, e.g., `lein test` with some additions to set the error format might be the `make` program.
vim-slime does not have an understanding of s-expressions, but vim itself does, using the parentheses text object. See `:help text-objects`. I like that vim-slime does not have this understanding, because what if I'm using it with Ruby or something? But vim's support of text objects is good enough that it's never a problem. While I'm on the subject, `])` and `[(` searching for parens in vim is awesome.
Vim operates using an entirely different set of values than emacs. Selecting and searching for text should not be the purview of a plug-in, it should be built into the editor. Compiling code should be delegated to compilers. Vim does a great job of this kind of separation of concerns, it is the Zen of vim, its core philosophy. Emacs is great and it works fantastically, it just operates using a different philosophy.
2. I work more in Clojure but I'm trying to get more into common lisp. Clojure has suggestions in neovim via language server protocol[2]. The suggestions are very good[3]. This is not as developed for the common lisp story[4] but I'll try it out and see how good it is.
I've never had much patience for autocompletion. I don't even like it when the editor puts in a pair of parentheses when I've only typed the first left parenthesis. Vim does have autocomplete through the language server protocol, but I don't use it.
Regarding suggestions as I type, there's ale[5], but I think I would find suggestions as I type a little too distracting and might slow me down. I find the normal LSP stuff to be good enough.
1. Is the biggest deal to me, and I would definitely not use a language if it was somehow incompatible with the way I like to edit. For me it's the other way around, though, since I like to live within emacs.
Even though I liked the language and would have liked to use it more, I did abandon Pharo because I just couldn't imagine ever liking having to use their IDE which depends too much on the mouse. That's the only time I have done that since emacs usually handles most languages just fine.
Julia is excellent. There are a lot of things in Julia (think multiple dispatch) that, once you learn them, you can't stop noticing other languages doing wrong.
I don't understand why people keep talking about multiple dispatch like Julia invented it. You can do that in many other languages, even languages designed for numeric computation. What's cool about Julia is that it has brought 90s compiler technology to scientific computing: a field which still thinks MATLAB is a really good way to develop and communicate scientific ideas.
I think a lot of the reason is that most previous incarnations of multiple dispatch give you slow multiple dispatch, and a different way to define functions that don't have multiple dispatch and are faster to call. As such MD tends not to be commonly used in languages where it exists. Julia isn't the first to have multiple dispatch, but it is the first where everything is multiple dispatch. The result of this is that we put in a ton of work to make multiple dispatch fast, and all the APIs are designed around it, which gives a very different feel to the language.
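A small illustration of that point (my own sketch, illustrative names): the same generic function dispatches on the types of all its arguments, and since this is the only function semantics in the language, the compiler specializes each call site instead of doing a slow lookup:

```julia
abstract type Pet end
struct Dog <: Pet end
struct Cat <: Pet end

# Methods are selected on *all* argument types, not a single receiver:
meets(a::Dog, b::Cat) = "chases"
meets(a::Cat, b::Dog) = "hisses at"
meets(a::Pet, b::Pet) = "sniffs"

encounter(a::Pet, b::Pet) = "$(typeof(a)) $(meets(a, b)) $(typeof(b))"

encounter(Dog(), Cat())   # "Dog chases Cat"
encounter(Cat(), Cat())   # "Cat sniffs Cat"
```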
As I noted around a year ago, there's a little puzzle here: in principle Julia's approach was available in the 90s, when decent JIT technology was becoming available. So why was MD given only toy-like treatment before Julia? (MATLAB did get a JIT in 2002, so it's not really a toy, but the performance was not good enough to prove the concept)
My impression is that the complained-about inflexible division between concrete and abstract types was pretty necessary to achieving good results in practice, but this division is alien to the expressiveness-loving lisp culture.
MATLAB is fairly impressive for development. It works ok for communication. What are the areas for improvement you have in mind, and what are the alternatives I should consider?
The 2 biggest problems with Matlab are price and lack of community. The price is an issue even if you can afford it because it makes it really hard to deploy widely since anyone who wants to run the code also needs to be paying Mathworks. This closely ties in to the community issue. Almost all Matlab libraries are proprietary (and written by MathWorks). If you find a bug, your only option is to file a report and wait 3 years for it to not get fixed. In an open source ecosystem, you can dig into the code and fix the problem yourself if you need to.
I once got on a call with a MATLAB compiler engineer and had him fix a bug in an afternoon. I should have written a blog post about it, but it's too late; I have very little recollection of the details. It was kind of awesome, though. How often can you just call someone up to fix the compiler? To be fair, I worked for a very large Fortune 50 company that had over 10k licenses of MATLAB.
Also, it is interesting that outside of the SV and HN crowd, we thought MATLAB was awesome. We had all the toolboxes; there are a gazillion of them. Even obscure RF-related stuff. You just can't find a library for something like phased array analysis. Maybe you can, but it won't be as high quality and industry proven as this: https://www.mathworks.com/products/phased-array.html
Another thing we understood is that when a jet engine is hoisted up for testing and you need to get something fixed, or have a question answered in the next 2 hours, you need commercial software support to help you out. The gravity of the situation cannot be overstated. It is stressful. There is OSS software support, but MATLAB is on another level. Absolutely outstanding support. Companies like Apple and SpaceX rely on MATLAB heavily outside of their software engineering orgs.
If you are basically using finished code, your own code is mostly a top-level script, and you mainly need polished functionality and professional support, then Matlab toolboxes and support are very good.
But the language itself is primitive compared to OSS alternatives, so when you need to develop your own software on a larger scale, it falls short.
There is of course a sliding scale between 'user' and 'developer' in any language, but I think that the closer you are to the 'user' end of the scale, the better Matlab looks.
I don't find the language primitive at all. It just has a poor deployment story that consists of "autogenerate hideous, unreadable C, C++, CUDA, or VHDL, or write it again yourself". It is amazing the autogen works at all, which it really does, but that code...
But back to the language. I can typically do a line-by-line syntax change to get to PyTorch (Julia is a bit different, but not too much). The resulting Matlab sometimes runs faster, too. If you avoid globals and write everything vectorized, it is all really clean. OO code would be hideous, but I try not to use OO or goto in any language unless there is no other option. I like that arguments to a function are pass-by-reference unless you write to them, in which case it does a smart copy. Julia syntax looks different sometimes because you don't have to vectorize for performance as in Matlab or Python; in fact, sometimes you shouldn't.
All in all, if the deployment story were solved, I probably wouldn't be trying out Julia. In fact I still prototype in Matlab before implementing in PyTorch or Julia; it's just easier to get that first thing working.
Everything is a matrix, that's horrific (especially the Nx1 vs 1xN ambiguity which pops up all the time). No default argument values (and until recently, no support for keyword args, though they have a bad version now), meaning half your code lines are input parsing. Forced vectorization does the opposite of making code clean, it makes complicated code a lovecraftian mess (unlike Julia, which lets you vectorize efficiently at the top level.) Every function must live in an m-file(#$@&%*!) Everything in your path is in scope :(
The varargin/varargout/nargout mess, with outputs specified in the signature line, instead of proper return statements.
Also, for a language that requires vectorization for performance, there should really be a proper `map`, instead of the mess that is `arrayfun`/`cellfun`/`structfun`. Their arrays are super-limited, no mixed-element arrays (use cell-arrays!), so `[3, [4,5,6], 7]` is just concatenated, while `[3, [4;5;6], 7]` errors (and check out what `[3, "hello", 7]` does(!) or `[3, 'hello', 7]`)
Poor support for integers, 2 is a double(!), and `int8(3)/int8(2) == int8(2)` (yikes.)
Their OOP is actually not that bad, though it's slow. And their graphics system is pretty ok.
Also annoying: now you have two types of strings, old-fashioned 'abc' and new-fangled "abc", which are very different and only sort-of, half-way work together. I think moving to the new strings is actually a good move, but it's painful now.
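Since this subthread is comparing against Julia anyway, here is roughly how those same points land there (my own contrast sketch, not from the parent):

```julia
# Real 1-D vectors, distinct from 2-D matrices:
v = [4, 5, 6];  size(v)        # (3,)    -- no Nx1 vs 1xN ambiguity
m = [4 5 6];    size(m)        # (1, 3)  -- an explicit 1x3 matrix

# Mixed-element arrays just work (element type widens to Any):
[3, [4, 5, 6], 7]              # a 3-element Vector{Any}, no concatenation

# Default and keyword arguments live in the signature,
# not in nargin-checking boilerplate:
scale(x, factor=2; offset=0) = x * factor + offset
scale(10)                      # 20
scale(10, 3; offset=1)         # 31

# Integer literals are integers, and lossy conversions error
# instead of silently rounding:
Int8(3) ÷ Int8(2)              # 1 (truncating integer division)
# Int8(2.6)                    # would throw InexactError
```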
I want Nx1 and 1xN to act differently. It is a form of type safety. I would be fine with defaulting to Nx1x1...x1 as well and having everything be a tensor where you specify on which side all the singleton dimensions go. Disagree, but they could do this better; matrix just happens to cover most cases.
I consider default arguments a code smell in every language. Or really, more foot-guns than code smell. I discourage them whenever anyone will listen. Disagree on this one.
Not super fond of kwargs either, but they certainly have their place. If they are going to support them, they should do so well. Agreed.
I actually prefer to read vectorized notation, I wish it was more consistently performant in Julia, sometimes the for loops run faster, but they take longer for me to read and understand. The exception is if there is an einsum in there somewhere, or the equivalent auto expansions in Matlab, that takes me a few. Personal preference I guess?
I've never come across a use for mixed arrays, but every time I come across an API that returns them I begin cursing. Agreed.
Typecast rounds instead of floors? That is kinda odd, but not wrong I guess? I haven't ever run across this because I use floor or round explicitly.
In 20 years I've never noticed the string thing. I'll have to read about that, thanks!
Yes. Of course you do. The problem is, you never know what you are going to get. What you actually want is a vector, but Matlab has no such thing, so you have to write your code in a way that anticipates either Nx1 or 1xN, and handle both. Sounds simple enough, but I have a lot of code lines dedicated to checking and handling row/column orientation. If only there were real 1D vectors!
> I consider default arguments a code smell in every language. Or really, more foot-guns than code smell.
I don't really understand what you mean here, but the problem is that in Matlab you handle 'default arg values' with `if nargin < 5, par5 = default_value` etc. It's just worse, and it's perfectly idiomatic Matlab.
> I actually prefer to read vectorized notation
My argument is actually that vectorization works much more cleanly in Julia. If in Matlab you have `foo`, which calls `bar` which calls `baz`, etc, then you must make sure that each of these explicitly can handle array inputs. You have to think about arrays on every level, including what happens with axis broadcasting (and probably checking 1xN vs Nx1 orientations on multiple levels). In Julia, on the other hand, you can write your functions `foo`, `bar` and `baz` to handle scalar arguments, and then you vectorize the whole thing with `foo.(args)`.
So vectorized code in Julia is much simpler to write, and also simpler to read. (Just to be clear: writing vectorized Matlab code and vectorized Julia code are things I do all day, every day, so I have a decent basis for comparison.)
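To make the `foo`/`bar`/`baz` point concrete, a sketch of what I mean: the functions are written for scalars, and a single dot at the call site vectorizes the whole composition (with `@.` fusing chained calls into one loop):

```julia
# Scalar functions, written with no thought about arrays:
foo(x) = x^2 + 1
bar(x) = sqrt(foo(x))
baz(x) = bar(x) / 2

xs = [1.0, 2.0, 3.0]

baz.(xs)               # one dot vectorizes the entire call chain
@. sqrt(foo(xs)) / 2   # @. dots every call and fuses it into a single loop
```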
> sometimes the for loops run faster, but they take longer for me to read and understand.
Loops vs broadcasting should be basically the same for performance, but sometimes you can get extra performance from a loop, by exploiting algorithmic advantages. But a simple vectorization is just a dot away.
> mixed arrays
It's something one generally tries to avoid, but they are often necessary for passing along arguments to inner functions, etc. Tuples are great for this, but they don't exist in Matlab.
> Typecast rounds instead of floors?
It's odd, but I don't mind. Mainly, I am annoyed that integers are not well supported in Matlab.
> In 20 years I've never noticed the string thing
The "strings" are new, a couple of versions back. Of course, you will notice that "string" is actually a 1x1 matrix of strings, so length("hello") equals 1. And indexing into strings is actually indexing into the array of strings. So `str = "hello"` then `str(1)` returns the string itself, and `str(2)` errors.
I don't want 1d vectors to exist; ideally, I'd want everything to be infinite-rank tensors with some clean notation for singletons x N, or N x singletons, or NxMx singletons, etc. The dimension mismatch has caught more errors for me than I could ever possibly count. My functions don't check to see if something is scalar or columnar; I let the error tell the user their mistake. And using a column vector when a row vector is expected is a math error. And the same function usually works for scalar or vector or matrix, for the most part.
I do like the dot notation and Julia in general, but I just don't have many complaints about Matlab. Auto expansion maybe. They do allow helper functions in the same file now, finally.
Well, in many cases, dimension mismatch causes errors rather than just letting you find them. And in fact, with the recently-ish introduced broadcasting behavior, you don't even get an error, just surprising dimension-increasing behavior.
As for "math error", it often has nothing to do with that; you just want a list of values, and what should such a list be? Column or row? And often, I'm the user, calling into other people's code that sometimes wants columns, sometimes rows, and behaves surprisingly differently for each (sometimes with errors, sometimes not).
There should be a convention in Matlab about whether fundamentally 1d structures should be columns or rows, but there isn't. (Just for example: reductions work down columns, except when the columns are length-1, while iteration only occurs along rows...)
This is a huge, huge problem in my daily work. I can give many examples, but am on my phone now.
Let me put it like this: if I could change only one single thing about Matlab, I would introduce a proper vector.
> Also, it is interesting that outside of the SV and HN crowd, we thought MATLAB was awesome. We had all the toolboxes; there are a gazillion of them. Even obscure RF-related stuff. You just can't find a library for something like phased array analysis. Maybe you can, but it won't be as high quality and industry proven as this: https://www.mathworks.com/products/phased-array.html
As a matter of fact, I find most comments about Matlab here supportive. I actually do find Matlab to be a polished product, especially the IDE and the documentation. In the end you use whatever saves you time; that's why most people who use Python have chosen it: the libraries. I have personally found the Matlab toolboxes quite simplistic for my own purposes. Most people, myself included, actually put more trust in software that is used and deployed in the millions than in any kind of support.
> How often can you just call someone up to fix the compiler? To be fair, I worked for a very large Fortune 50 company that had over 10k licenses of MATLAB.
Very often if you're paying upwards of $10,000,000 in licenses a month.
The community issue also means that code sharing is very limited compared to other languages. If it isn't in one of the Mathworks toolboxes (that you can afford), you'll probably have to end up at the Mathworks "File Exchange" site, where you take a look at the existing code that "solves your problem" (in a non-generic, buggy way), cry your eyes out, and end up implementing it yourself.
Both fair points. And the alternative? I like Julia, but I still haven't found a workflow I like. I still typically prototype in Matlab and deploy in C++/CUDA.
I'm trying to work entirely inside the REPL. It's not bad, but for large stuff the lack of go-to-definition hurts, and there isn't an easy way to copy commands that worked into my module. I'll try VSCode next; that is what I use for C++/C. Is there anything else I should try?
I use VSCode+REPL, which works well for me (although the language server for Julia isn't as good as a C/C++ one). The one other workflow to look at is Pluto notebooks. They're similar to Jupyter notebooks, but they track cell dependencies and automatically re-run dependent cells, to guarantee that you maintain consistent state.
I worked in Pluto for a bit. I should give that another shot. If there were an easy way for someone to simply run the script without it trying to fire off plots, that would be the ultimate in self-documenting code.
If you read the small print on that license, it almost certainly says something like "for hobby use only; not for academic or commercial use". A MATLAB license that allows commercial use starts at around $2000/user.
Which Lisp? What applications do you think it is slow at? Hint: it can be faster than C thanks to compiler macros, and in fact its regular-expression engine is much faster than Perl's, which is written in C, just to give a concrete example.
My experience is that even writing "idiomatic", CLOS-heavy code tends to be faster out of the box than most dynlangs out there. Writing close-to-C++ code requires a lot of manual work and is probably not worth it except for optimizing hot paths.
Why not: from Common Lisp to Julia
https://gist.github.com/digikar99/24decb414ddfa15a220b27f674...