I remember that day pretty clearly because in the same lightning talk session, Solomon Hykes introduced the Python community to Docker, while still working at dotCloud. I think it may have been the earliest public, recorded tech talk on the subject.
This was also the same PyCon that Eben Upton opened with a keynote to discuss the mission of Raspberry Pi, while also announcing that each of the 2,500 attendees would receive a free physical Raspberry Pi at the conference.
It was a pretty darn good year of PyCon; 2013 was apparently a good vintage for open source tech with Python connections.
Which is also what I thought about PyPy.
Both of which seem to have proven me wrong.
If it's the ecosystem that's of interest, there are solutions for that too: https://nextjournal.com/chrisn/fun-with-matplotlib
I’d really like to see a 1.0 with let back in the core, but I don’t see that happening just yet.
Edit: typos, plus I’m also curious to see what https://github.com/gilch/hissp will do.
Nowadays it seems to work, but rewriting my code to catch up with today’s version (and cope with the other changes introduced over the last couple of years) is not something I’m interested in doing.
- there is py4cl and async-process: https://github.com/CodyReichert/awesome-cl#python
- NumCL is a clone of Numpy: https://github.com/numcl/numcl
Common Lisp has many advantages though:
- you can build a self-contained binary of your app, including web app. It's such a breeze to deploy, and it's possible to ship a binary to users.
- the REPL is much better and more fun than ipython
- CL is compiled. We compile our code function by function, and we get many compiler warnings at compile time.
- CL is stable, the syntax is simple, you can add the syntax you want with extensions (decorators, string interpolation,…) and yet the implementations keep improving, and new ones are created (Clasp, CL on LLVM).
- CL is fast (how pgloader got rewritten from Python to CL: https://tapoueh.org/blog/2014/05/why-is-pgloader-so-much-fas...)
- there may be more libraries than you think: https://github.com/CodyReichert/awesome-cl
- the object system is very nice, and multiple dispatch gives my app a cleaner and shorter API.
- there are more editors with good support than just Emacs: Atom support is getting very good, and there's more: https://lispcookbook.github.io/cl-cookbook/editor-support.ht...
- the (default) package management system works with "distributions", à la apt/Debian, so it doesn't break suddenly because of a third-party dependency of a third-party dependency.
Of course, the web stack is less mature (but more promising: see Weblocks http://40ants.com/weblocks/) and CL might be more difficult. It's not for everyone. But it's been a joy so far.
I'm not a dev, but a Python user with a background in data/info science, so I'm a bit unclear on what Wikipedia means by things like "a dialect language of Lisp", and using it to write "domain specific languages."
Is this just a way to use Python libraries in Lisp (which seems to be a low-level programming language?)
The "dialect language of Lisp" quote just means that it's a variation of Lisp, but running in a Python environment.
The term "domain-specific languages" refers to programming languages created for a specific, often one-of, task. If you look at this repo, you'll see various languages used to create diagrams and graphs in this library. These are all examples of DSLs (note, they're in many places, this is just an example):
Regarding Hy specifically, it's basically Python that uses a different syntax, or skin, of sorts. Instead of white-spaces and colons, you get parentheses and...parentheses. It's more complex than that, but it's basically just a way to write Python for people that like Lisps.
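To make the "different skin" point concrete, here is one tiny function written both ways; the Hy side is a sketch that assumes a reasonably recent Hy release:

    ;; Python:
    ;;   def greet(name):
    ;;       return f"Hello, {name}!"
    ;; The same thing in Hy (the last form is returned implicitly):
    (defn greet [name]
      f"Hello, {name}!")
    (print (greet "world"))  ; => Hello, world!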
0. Yes, there are some examples of low-level lisps without features like memory management, but they're pretty unusual.
It would genuinely be worth your while to watch the SICP lectures with Gerry Sussman and Hal Abelson to get an inkling of an idea of what "your program is just more data" can mean. Lisp is more about bringing the language up to the level of the problem than bringing the problem down to the level of the language, and it's difficult to appreciate what that means until you've seen it. By the time you get to the end of lecture 3B, it should click.
Because I'm still a bit confused at what Lisp is!
You're in for a treat!
For most practical purposes it's pretty low level. You don't really write in a language; you write the AST directly, which, while powerful, is IMO not very high level.
For example, the LOOP macro introduces a non-prefix syntax:
  (loop for i below 100
        for j = (random 100)
        sum j into s
        while (< s 1000)
        finally (return i))
Generally Lisp is low- to mid-level as a programming language, with a bunch of features which can be considered high-level: extensive macro system, Common Lisp Object System, extensive error handling, ...
I believe that no programming language with automatic memory management can be considered "low-level".
That aside, why would you call Lisp "low- to mid-level"? Commonly-used implementations are at about the same level of abstraction as Ruby or Python.
That's a view of a whole language and its implementation. Still it may have a range of features which are low level.
Lisp for example has Foreign Function Interfaces (FFI) with low-level interfaces and manual memory management.
Basic Lisp stuff like CAR and CDR were (almost) instructions on a CPU ( https://en.wikipedia.org/wiki/CAR_and_CDR ).
Something like a cons cell (the building block for lists) is basically a two-element vector. Lists were made by chaining them together via a CONS operator, which creates such a two-element vector.
Such a linked list data structure is pretty low-level and the typical mark&sweep GC of the early days is also relatively basic.
There is not much magic to it.
Many other programming languages have much more complex basic data structures (see object-oriented programming in Java with classes and instances, inheritance, interfaces, namespaces, ...). Compared to that the basic linked list in Lisp is primitive.
> I believe that no programming language with automatic memory management can be considered "low-level".
See the standards for Scheme or Common Lisp. There is not a word about automatic memory management in the specifications. Automatic memory management is a feature of an implementation, just like foreign function interfaces. Most implementations have a kind of garbage collector. But most implementations also have manual memory management.
People even write operating systems in Lisp sometimes: https://github.com/froggey/Mezzano
but didn't get much attention at the time:
BETTER IS BETTER THAN WORSE:
TALES OF A WEEK WITH HY AND A CALL FOR REVOLUTION
by "Mr. Mojo Rising"
Also, I generally find the libraries to be amateurishly designed. Big one that comes to mind is Pandas.
Kawa [Polish for coffee] is a language framework written in the programming language Java that implements the programming language Scheme, a dialect of Lisp, and can be used to implement other languages to run on the Java virtual machine (JVM). It is a part of the GNU Project.
(I later ported it back to pure Python 3, because I couldn’t move forward while Hy was in flux)
If let was “fixed”, I’d probably move back to Hy.
- On CPython, the generated Python code tends to be noticeably slower in hot loops than the equivalent "native" Python code. I haven't done any testing with PyPy.
- Can't really use it at work since nobody will know how to read it.
My conclusion is: it's fun, it's Lisp, and it's Python. For personal projects and small standalone tools it's perfect.
Most things are just a (import x) away.
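For instance, calling into an ordinary Python standard-library module looks roughly like this (a minimal sketch; the module and call are plain Python underneath):

    (import json)
    (print (json.dumps {"answer" 42}))  ; the stdlib module works as-is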
The one thing that doesn't work is autodoc/Sphinx. Otherwise it's all there, and quite pleasant to work with given a rainbow-parenthesis extension and Parinfer.
Have to resort to using the Web Archive: https://web.archive.org/web/20190714180208/http://docs.hylan...
(defna add [a :: int b :: int] (+ a b))
You could expand that macro to support decorators as well, via extra arguments before the args list, and you'd be in great shape.
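A rough sketch of what such a macro could look like, assuming a fairly recent Hy; "defna" and the "::" convention are the hypothetical ones from the comment above, not anything Hy ships with:

    (defmacro defna [name params #* body]
      ;; expect params shaped like [a :: int  b :: int]:
      ;; names at positions 0, 3, ... and declared types at positions 2, 5, ...
      (setv names (cut params 0 None 3))
      (setv types (cut params 2 None 3))
      ;; here the types only end up in the docstring; a fuller version could
      ;; emit real annotations, and extra arguments before the args list
      ;; could be turned into decorator forms
      (setv doc (.join ", " (gfor [n t] (zip names types) f"{n} : {t}")))
      `(defn ~name [~@names]
         ~doc
         ~@body))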
Usually Lisp and FP mean that you get immutable data structures and a good concurrency system, both of which you get in Clojure.
I looked at this a while ago and figured it's not useful for production, but an interesting project none the less.
No, Lisp and FP wasn't traditionally tied to "immutable data structures and a good concurrency system".
That's something that came into fashion much later (let's say < 15 years ago).
In pre-2010 discussions of FP on HN, few mentioned immutability or "good concurrency" as typical FP traits, if they were mentioned at all. The same is true for old Paul Graham posts on Lisp. Those were just another tool in the toolbox, not something inherently FP. Haskell wasn't what "FP is really about" either.
FP back then was all about first class functions, map/reduce/filter/and co, homoiconicity, code as data, macros, and DSLs.
Paul Graham is also a pretty poor example in my opinion. He has traditionally used a very Scheme-like Lisp style. In 'On Lisp' he starts off by advising the reader to structure their programs by trying to keep side effects mostly in the outer layer of the program. He's not an FP purist who considers destructive procedures a sin, but he definitely advocated for immutability (just not under that name).
FP discussion outside of the academic context (and especially on HN) before a certain date in the past 1-2 decades were all about Lisp(s) - and to a lesser degree Scheme(s).
It doesn't matter whether that was "really FP" (no true Scotsman); that's what was discussed, advocated, and practiced by most as "functional programming".
ML(s) played very little part in the discussion, and purity/immutability was discussed, but not as the most important thing: rather as the more pure/mathematical version of FP, not necessarily practical for professional programming. That came later, when Haskell and then Clojure popularized it.
But interestingly enough, idiomatic Common Lisp code can roughly play in the same field as Python in terms of "purity". Also, lots of Scheme code is almost imperative in nature, if you look at it from the perspective of a Clojure or Haskell programmer. This is why Paul Graham once gave the advice to use Python if you cannot use Common Lisp.
I think Richard P. Gabriel wrote a "complaint" about functional programming being redefined by the research/types/ML crowd (he wasn't so much complaining about them striving for purer FP, but about them taking over the conferences, etc., driving out the more traditional FP traditions from the Lisp communities).
Anyways, what I wanted to say is: FP is fairly broad as a topic, a lot can fall under it or out of it depending where you draw the line. That's not a bad thing and there is space for mutable lisps.
The other aspect is: if you have Python, there is not much to gain from s-expressions alone. Sure, you get macros, but otherwise you already have a lot of the benefits you get from Lisp over, let's say, C++ (dynamic typing).
Scheme has pioneered or picked up a lot of advances for making programs more functional, for example lexical scoping, closures, etc. that were not present in every lisp.
What's a bit funny is that I have experienced some Schemers sneering at CL for "lacking the functional elegance", and Common Lispers sneering at Scheme for not being a true Lisp with namespacing, etc.
One observation I have is that, over my career, I have started to see more patterns in code (they were probably there years ago but I wouldn't have recognized them). Sometimes I see monadic patterns in otherwise object-oriented code bases, or I see dependency injection realized in functional, immutable code bases. Maybe it would be time to revive the pattern movement, to recompile common patterns and aim at a modern description of these. I feel like a lot could be gained.
Also, I'll walk out of any interview where I'm asked again about the visitor pattern. So tired.
The former implies immutability and technically belongs to the declarative side of things: a mathematical function declares a relation between input and output. But I say "technically" because functional code can still read rather algorithmically.
The latter can be added to imperative languages that do feature mutable state. Depending on the language, I might still classify it as functional, but impure.
Lately, I think, there is even a de-emphasis of _functions_ as the base of (purely) functional programming. What's now being emphasized is the composability aspect. Functions are an abstraction that is composable, but so are other abstractions.
FP, yes. Lisp? No. Lisp data structures are generally mutable, and I don't think concurrency is a key aspect of Lisp (just look at Emacs Lisp for something with much worse concurrency than Python).
I can see, even in larger Python codebases, some places where e.g. support for some s-exp minilanguage could be useful and implemented rather trivially via Hy, or where one could embed a processing kernel that is well suited to being written with lispy semantics (e.g. symbolic programming). Another use could be defining core primitives in Python and gluing them together using a hierarchy of configuration + Hy macros to generate specialized '__main__' programs for task-specific use cases.
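As a sketch of that kind of embedding (module and function names are made up; the only assumption is that the hy package is installed, so its import hook is available):

    ;; kernel.hy -- a small symbolic-rewrite kernel kept in Hy
    (defn simplify [expr]
      ;; expr is a nested tuple such as ("+", 0, x); rewrite rules over
      ;; s-expression-shaped data are easy to express here
      (if (and (isinstance expr tuple)
               (= (get expr 0) "+")
               (= (get expr 1) 0))
          (get expr 2)
          expr))
    ;; From the Python side it imports like any other module:
    ;;   import hy                      # registers the .hy import hook
    ;;   from kernel import simplify
    ;;   simplify(("+", 0, 5))          # => 5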
Python doesn't implement a standard virtual machine. The best you could do is some kind of translation to CPython bytecode, which doesn't really buy you much (and loses compatibility with other Python implementations like PyPy).
You're likely being downvoted because statements like these are more a reflection of you than the language, but instead of phrasing it that way, you come across as making an absolute statement about the language.
In any case, I'll bite.
I'm not a Lisp programmer - the extent to which I do Lisp is the occasional Emacs Lisp. Still:
You can use virtually any character as part of your variable name. Lispers tend to use hyphen and '?' a lot. Once you get used to it, it's hard not to view all other languages as inferior. Why did they restrict the variable names so much?
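Hy, for what it's worth, keeps this property: hyphenated names and the like are accepted and mangled to valid Python identifiers under the hood. A small sketch, assuming a recent Hy:

    (setv running-total 0)                 ; becomes running_total on the Python side
    (defn no-items? [coll] (= (len coll) 0))
    (print (no-items? []))                 ; => True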
A lot more substantive: I saw this in John Kitchin's blog post somewhere. My details are probably a bit off, but I think I have the general gist of it. His team works on, I believe, computational chemistry. Their code base is in Python, and they've developed an API for various aspects of their calculations. A lot of their functions require a lot of arguments - not generally a good idea, but it makes sense in the domain they work in. To manage that headache, they've built a systematic way of documenting all those arguments in the docstring, with a particular format.
Unfortunately, that means anyone in his team who creates such a function needs to conform to that format, and people get lazy, make mistakes, etc.
So using Hy, he wrote a macro for another syntax for defining a function. With this new way of defining a function, there was special syntax for all the special information that needs to be in the docstring. The macro then creates the Python function with the appropriately written docstring. If the programmer makes a mistake, the macro fails. So now everyone uses the macro to create those types of functions.
I don't recall the details, but let's say it was something like:
deff func(x, x_doc="doc for x", y, y_doc="doc for y",...)
(I think in reality it was more than that - not just x_doc but x_doc_a, x_doc_b, etc and it generated the docstring from all those pieces).
Now are there other, perhaps Pythonic ways to do this? Probably. However, it is nice that for a domain specific problem, you can just invent new syntax and use it.
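In Hy, a macro along those lines might look roughly like this; purely illustrative, with "deff" and the name/doc pairing reconstructed from the anecdote rather than taken from Kitchin's actual code:

    (defmacro deff [name params #* body]
      ;; params alternates argument symbols and their doc strings:
      ;; [x "doc for x"  y "doc for y"]
      (when (% (len params) 2)
        (raise (SyntaxError "deff: every argument needs a doc string")))
      (setv names (cut params 0 None 2))
      (setv docs  (cut params 1 None 2))
      (setv docstring (.join "\n" (gfor [n d] (zip names docs) f"{n}: {d}")))
      `(defn ~name [~@names]
         ~docstring
         ~@body))

    ;; (deff area [w "width in metres"  h "height in metres"] (* w h))
    ;; expands to a normal function whose docstring documents each argument,
    ;; and a missing doc string makes the macro fail at expansion time,
    ;; which is the enforcement described above.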
I do not care about downvotes on here.
> You can use virtually any character as part of your variable name. Lispers
This is not a good thing.
> However, it is nice that for a domain specific problem, you can just invent new syntax and use it.
This is equally not a good thing.
1. I don't care
2. Not a good thing
Is not adding anything to the conversation. Have you examined your internal need to spend time commenting on this?
```[a-zA-Z_][a-zA-Z0-9_]*``` is such a universal and useful convention that I find myself using it even when the domain allows other characters (bash, Xnix filenames). Obviously this is English-centric so I would include other language constructs if I were in a country with another dominant language.
In fact even though I have the ability (languages supporting unicode and a QMK keyboard) to use symbols, e.g. Greek letters for math, I don't, because I know it's a hassle for anyone else.
> You can use virtually any character as part of your variable name.
> invent new syntax
are a hassle and anti-pattern for the same reason.
However if these features are exposed through clean abstractions in a language that is syntactically bare to begin with, you might have different thoughts.
Kind of a harsh assessment don't you think? Maybe some people enjoy working in Lisp-like languages while also enjoying Python's huge ecosystem of libraries. I doubt it's worthless for those folks.
I've written both Racket and Common Lisp professionally and consider myself very competent in both. In fact, I think they're both great languages, among the best around.
It's not because of the parens, though; it's because of what the parens give you: namely homoiconicity and easy macros.
Without those things, the increased syntax burden of parens doesn't buy you anything except visual noise and being forced to type extra characters.
Symbols are tokens made up of almost any sequence of non-whitespace characters: if you type <->, that is a symbol, without having to modify the lexical analyzer of the language to recognize a new token pattern.
Basic math operations being variadic: (+ term0 term1 term2 ...).
Here is an advantage of Lisp syntax: -123 is an actual negative integer token, and not the unary operator applied to 123, requiring evaluation.
It also makes it easier to edit structurally.
So yes, homoiconicity and easy macros are nice, but please refrain from including me in your use of "no one".
So instead of f(x), I have to accept the visual noise and extra characters of (f x)?
If anyone really cared about parens, they wouldn’t be using (or creating) these kinds of things in the first place. I think it’s a pretty indefensible position to say that Lispers actually just like parens for the sake of them. If that was the case, why not start a form with (( or even (((?
Like, does anyone like using progn or begin? I mean seriously..
Syntax is great, DSL’s are great, they help you to express things elegantly and succinctly. This is my major beef with Lisp. If I’m not manipulating the AST, I don’t want the heavy syntax burden. If I am, then great, Lisp is awesome.
Maybe my career is different from you guys', but I'm not transforming source code even 5% of the time. I'm adding numbers, I'm multiplying stuff, I'm writing out formulas. And let me tell you, trying to write out math with S-exps majorly sucks. You would have to be insane to think that "(+ (fib (- n 1)) (fib (- n 2)))" is better than "fib(n - 1) + fib(n - 2)".
Sorry if this sounds like a rant (it kinda is), but I don’t think anyone thinks syntax is bad. It’s just that Lisp has minimal syntax and that seems good enough. You don’t notice all of the nice and convenient things you would have to give up if you actually had a completely regular syntax.
It hasn't. It just works differently.
S-expressions define syntax for data: lists, strings, numbers, symbols, vectors, arrays, ... Plus a bunch of abbreviations (quote, function quote, ...) and some extras (templates, reader control, ...). One can program that part with reader macros.
Then Lisp usually has around three (or more) syntax classes for prefix forms:
1) function calls
2) special forms like LET, ...
3) macros like DEFUN, DEFCLASS, WITH-SLOTS, ...
Special operators and macro operators introduce syntax. See the Scheme manuals or the Common Lisp spec for the EBNF definitions of these syntax operators. This can be simple or relatively complex (LOOP macro would be an example, but also something like DEFSTRUCT, ...). The user can write macros to extend the syntax.
Now if we want infix/mixfix arithmetic, we can embed it via the reader
#i( fib(n - 1) + fib(n - 2) )
or as a macro:
(infix fib (n - 1) + fib (n - 2))
Problem: it's not built-in and not that common.
> I’m adding numbers...
Now one would have a bunch of options:
1) live with the Lisp syntax as it is, improve it where possible
2) use syntax extensions for mathematical code, may have tooling drawbacks
3) use a different surface syntax for Lisp or a specialized syntax (see Macsyma and similar)
4) use a different language with the usual mix of prefix, infix and postfix operators
(f x) and f(x) have the exact same number of characters; one man's 'visual noise' is another's 'visual clarity'.
Also if you think parens and prefix notation are so great, would you also like to see math papers written that way?
The S-exp can be shortened in anything calling itself a Lisp, using the unary reciprocal: (+ (/ 2) 5).
Math papers are full of obscure notations; they should standardize on s-exps. Then just a straightforward dictionary of symbols would be needed to look up a notation.
Math notation is at least 2D: it makes good use of spatial structures to reduce ambiguity. For instance, instead of inventing some crazy ternary operator, mathematicians will just arrange three arguments spatially around a symbol. The space subdivides recursively, so elements end up being positionally determined, somewhat like what S-exps do in 1D.
That being said, a lot of mathematical notation is pretty close to lisp syntax, basically anything that works like sigma.