Hacker News
The Wren Programming Language (github.com/munificent)
164 points by api on April 28, 2018 | 30 comments



This note on benchmarks/performance is a gem:

> LuaJIT is run with the JIT disabled (i.e. in bytecode interpreter mode) since I want to support platforms where JIT-compilation is disallowed. LuaJIT with the JIT enabled is much faster than all of the other languages benchmarked, including Wren, because Mike Pall is a robot from the future.

Looks like a nice embeddable language.


I'm not an expert on this, but offhand I don't see anything fundamentally different between the capabilities of Lua and the capabilities of Python, Perl, Ruby, JavaScript, and other widespread scripting languages.

So why isn't there a PythonJIT, PerlJIT, RubyJIT, etc., comparable to LuaJIT?


I think it's mostly due to the lack of a Mike Pall :)

Effectively, though, Ruby and Perl are just much larger languages. E.g. Ruby has (or had) at least five different ways of controlling flow in a block: throw/catch, next/break, raise/rescue, return/implicit return, and callcc. 99% of code uses the same small subset, but if you want to support "Ruby" you need to implement all of them.

Likewise, Perl may have simple semantics underneath, but the sheer number of syntax-level elements is insane, partly because of accumulated version changes. E.g. did you know that declaring a variable with "local" instead of "my" makes it behave like a Lisp-ish dynamic variable? I am fairly sure nothing like that exists in Lua.

As for Python, there is PyPy, but I think the Unladen Swallow retrospective has some good points. Basically: not enough interest from developers, and not enough interest from users.

http://qinsb.blogspot.hu/2011/03/unladen-swallow-retrospecti...


Python also had Dropbox’s Pyston, which mostly failed:

https://blog.pyston.org/


PyPy is a JIT for Python that's usually faster than the CPython interpreter, but not quite as fast as LuaJIT.

One of the reasons is that Python is a more complex language, with features like varargs and dict-unpacking that need to be considered at every call, and properties/__getattr__/__getattribute__ inserting themselves into the path of a simple field access. That might not be much worse than Lua's metatables, but the standard library uses those features in lots of places, so you can't really avoid them.

Additionally, IIRC LuaJIT uses hand-rolled assembly where necessary, while PyPy is written in Python and converted to C by the RPython toolchain using a process that involves abstract interpretation of the bytecode obtained by reflection on live code objects. That allows making use of all kinds of complex metaprogramming at initialization time to simplify the implementation, but the generated C code is unlikely to be maximally efficient.


One of the main reasons, for Python, is that a huge part of the ecosystem depends on C extensions written against the reference implementation's C API. Getting those to work with good performance under a JIT is very difficult. PyPy is only now getting to the point where most C extensions work with reasonable performance, so for many Python users a JIT hasn't offered much advantage.


> I don't see anything fundamentally different between the capabilities of Lua and the capabilities of Python

I don't know much about Lua, but one of the problems with compiling Python (or even just cutting corners to speed up execution) is that it is extremely reflective. At runtime, you can inspect all the objects in the system, including the bytecode you are executing and the entire call stack. And worse, you can rewrite all of this at runtime, at will. Self-modifying code is a bit of a pain to compile.

Of course, almost all Python code does not do this, but if you want to handle the full language, you must be prepared for all of it.
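For example, the live call stack is ordinary, inspectable data (a small sketch using the standard inspect module):

```python
import inspect

def peek_at_caller():
    # Each stack frame exposes its code object, local variables, and
    # current line number as plain Python objects, at runtime.
    caller = inspect.currentframe().f_back
    return caller.f_code.co_name, dict(caller.f_locals)

def compute():
    secret = 7
    return peek_at_caller()

name, seen = compute()
print(name, seen)  # compute {'secret': 7}
```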


In addition to the other answers, there exist Jython and JRuby, which effectively bring a JIT (the JVM's) to Python and Ruby.



> LuaJIT with the JIT enabled is much faster than all of the other languages benchmarked

I'm wondering if somebody knows of a good benchmark showing this? The ones I found in a cursory Google search all seemed to show that, for example, Node.js on V8 outperforms or matches LuaJIT.

EDIT: It seems they only compared against Python and Ruby, which explains why LuaJIT was always the fastest in their benchmark: http://wren.io/performance.html


Wren is a gorgeous accomplishment. The speed gains are thanks to careful design, such as emphasizing static definitions of classes, private functions, constant slots, and computed jumps.

If anyone asks you "what does good code look like", Wren is a great example of thoughtful tradeoffs and excellent explanations.


Thank you! <3


This is from Bob Nystrom who's also behind Crafting Interpreters [1] and is a general PL geek.

[1]: http://craftinginterpreters.com/


Unfortunately, Java is a poor choice for implementing a programming language, and there's nothing about implementing static typing in this book. I honestly believe most folks would be better off reading SICP or similar. What is needed is a book that makes more modern ideas (and more modern languages) accessible to industry.


The book is more about explaining how things work and showing how they're implemented in a simple way. Given that premise, I think Java is a fair choice, because almost everyone knows it. If you're writing a book that shows every single line of code of the implementation, not having to explain the implementation language is a huge benefit.

Regarding type checking, I believe it wouldn't be too hard to add it to the language itself, but it would increase the page count of an already long book.

All things considered I think the book does a good job of explaining the concepts in a very accessible way without handwaving any detail.


I understand the benefits of a single numeric type, but a numeric type that doesn't support full 64-bit integers is just painful. It makes the documentation smaller and benchmarks look good, but it often becomes a problem in real usage.
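Concretely, in any language whose sole numeric type is an IEEE-754 double, integer precision runs out at 2**53 (a quick Python sketch of the failure mode, since doubles behave the same everywhere):

```python
# A double has a 53-bit significand, so it represents every integer
# exactly only up to 2**53; past that, distinct integers collide.
limit = 2 ** 53                              # 9007199254740992

print(float(limit - 1) + 1 == float(limit))  # True: still exact below 2**53
print(float(limit) + 1 == float(limit))      # True: the +1 is silently lost
print(limit + 1 == limit)                    # False: Python ints are exact
```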


Wren does support 64-bit floats. For 64-bit ints, I find it's most awkward for interop with other languages, such as when using protobufs. If you control both ends, you can avoid this.


Found this while looking for good embeddable languages and it looks almost too good to be true: fast, relatively modern, secure, potentially concurrent, lightweight, ... seems like it's got everything but a pony.


I've used it in an experimental OS for the sort of work one might use shell scripts or small configuration programs for in Linux. I can say that literally the only annoyance was that back then the API docs had some "headings only" sections, but I see that pony is included now.

Having used it, I can recommend it without reservation.

Upon reflection, I should probably say why I chose Wren. The OS does not present a Unix-like interface to user land, which makes the C RTL not really a good fit (it's more like a raw L4). Rather than encourage mangling programs into a Unix mentality, I keep the C library to a minimum and encourage the native constructs for communication, storage, IPC, etc…

Initially I was using Lua, which is OK, but the aggressive overloading of tables for every construct makes for sort of a muddle of a language filled with unintended compromises. There are a lot of accidents waiting to happen. The large Lua ecosystem, which was initially a draw, ended up annoying me because of the lack of standard solutions and the incompatibility of the different choices. Lua works, it's fine, but ultimately I wanted something more precise.

I saw Wren mentioned on Hacker News one day while annoyed with some library's use of Lua coroutines, added Wren support in an afternoon, and haven't looked back. Ultimately I'd prefer something with compile-time safety and clean expressivity, such as Swift, but so far every time I think of retargeting Swift I come up with other projects.


You might also want to look at Pony. https://www.ponylang.org


Well, the one thing Wren is missing is static types. But then it probably wouldn't be small anymore.


Not necessarily. The compiler would become larger, but the VM would not. (I'm assuming the VM can be decoupled from the compiler, as in Lua, for restricted environments.)

Statically typed 'scripting' is a very interesting niche (e.g. Lily). That's not to underestimate the amount of thinking needed to devise a type system which is sufficiently powerful to be (a) correct (for some value of 'correct') and (b) as expressive as dynamic typing.


> The compiler would become larger, but the VM would not.

That's true in theory. But, in practice, if you ask your users to deal with the cognitive load of static types, they (reasonably) start expecting the performance of static types in return. At that point, you're doing a statically-typed VM, which is quite a bit more complex than a dynamically-typed one, I believe.

> (a) correct (for some value of 'correct') and (b) as expressive as dynamic typing.

Right. This is the big one, in my opinion. You can design a type system that's simple, sound, and expressive. But you only get to pick two of those adjectives. Today, because programmers are delightfully spoiled by a wealth of nice, expressive type systems, I think they won't settle for simple and sound.

Dart tried simple and expressive, but the lack of soundness made tooling much harder and really confused users who felt like they were being sufficiently penitent in their type annotations but still got punished for the sins of unsoundness anyway — type failures that were only caught at runtime.

So I think if you're designing a type system for a language today, you end up pushed pretty hard towards "sound + expressive", but now you're talking about a really complex beast.


> Today, because programmers are delightfully spoiled by a wealth of nice, expressive type systems, I think they won't settle for simple and sound.

To me that basically describes Go's type system. Very simple, allowing my brain to grok it, and part of what allows for Go's fast compile times. I'm not entirely sure what's meant by "sound", however. Safe? What kind of safe?


I find that type inference solves most of my woes with static types. At least, I've found myself reaching for Go to write quick scripts that I'd previously have used Python or Node for, especially when I need concurrency. Go doesn't have it, but for a scripting language, type inference (where possible) for function returns could be nice too.


Yes, but I don't think it is intended to be used to build huge systems.


I had considered it for embedding on an ARM micro, so C code could download and run test scripts.

Abandoned because it seemed like a dead project with no official or community support.

It’s surprisingly difficult to find a Turing-complete scripting language for micros that doesn’t take up 75 kB of ROM.

I’ll take another look, but for the most part I don’t know who has the time to gamble on unknowns like this for anything but hobby projects.


The GitHub repo seems active.

The lack of a big community can be explained by the language's role as an embedded domain-specific language, and by its simplicity.



The language manual is simple, easy to understand, and very informative.

I am impressed!

http://wren.io/syntax.html



