I had the pleasure of contributing to Gravity a few years ago. I added built-in filter, map, reduce, reverse, and sort functions and tests to the array types. Marco was very receptive to my additions, and it felt very welcoming to newcomers. I learned a lot about the internal workings of interpreters by making those contributions.
I imagine this is intended as an alternative for cases where embedded Lua gets used today. A comparison with Lua would have been great to see. The 200K runtime, for example, suggests that this might weigh in somewhat heavier than Lua.
Lua is indeed difficult to beat in this respect. If you need performance you can even embed LuaJIT with a footprint still far under one megabyte. And if you don't like the Lua syntax, there are already good alternatives, e.g. the statically/strongly typed https://github.com/rochus-keller/Oberon which directly generates LuaJIT bytecode (based on https://github.com/rochus-keller/LjTools). Here is a fairly complete list of languages compiling to Lua: https://github.com/hengestone/lua-languages. Gravity could be one of these if need be.
LuaJIT development has effectively halted, and it only supports Lua 5.1 (5.3 is current, 5.4 imminent).
I get the feeling that Lua is slowly on its way out because the academic nature of it never meant it had to be more than "good enough" outside its core strengths.
I think they should have kept a super-light "classic" Lua where ~= still means !=, there are only tables, and all the other quirks (but also speed). Then add official, optional support for classes, lists, dicts, and sets. This can probably all be built on top of setmetatable, so it would be pure Lua, but at least it would come as one consistent standard library and not in the form of a fractured ecosystem.
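For what it's worth, a minimal sketch of what such a class layer can look like when built on plain setmetatable (Point and its methods are made-up names, not part of any standard library):

    -- A minimal "class" built from plain tables and setmetatable.
    local Point = {}
    Point.__index = Point

    function Point.new(x, y)
      return setmetatable({ x = x, y = y }, Point)
    end

    function Point:length()
      return math.sqrt(self.x * self.x + self.y * self.y)
    end

    local p = Point.new(3, 4)
    print(p:length())  -- prints 5 (5.0 on Lua 5.3+)

This is exactly the kind of pattern that every OOP-on-Lua library reimplements slightly differently, which is the fragmentation the comment above is pointing at.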
All the mentioned compiles-to-Lua projects tried something similar, but never gained significant traction, probably because of the lack of an official blessing.
> LuaJIT development has effectively halted ... never meant it had to be more than "good enough" outside it's core strengths
That's a mistaken impression. Even the original author of LuaJIT still commits regularly, there are dozens of companies maintaining and using it in industrial-scale applications, and there are a couple of forks which focus on specific use cases and are independently developed. Not to forget the hundreds of games using Lua as a scripting language.
> only supports Lua 5.1 (5.3 is current, 5.4 imminent).
Which is obviously enough for most people using it, including me. I don't see any unique selling point which would force me to update to Lua 5.3 or higher.
> and all the other quirks (but also speed).
Well, do your research. LuaJIT is among the fastest JITs available (e.g. a factor of ~1.5 faster in geometric mean than JS V8), but with a much smaller footprint.
> 5.1 / 5.3 -- "don't see any unique selling point" -- "there are a couple of forks"
Doesn't that show that the actual users of Lua and the authors have drifted apart, and the former are not willing to commit to a hard fork or taking over development? Or both sides could come together to form a much-beloved committee. The curse of "good enough".
> quirks / fastest
I think you misunderstood: yes, it is fast even without LuaJIT, and super easy to embed and integrate, which is why this slow fading away into the embedded-only realm would be quite a shame.
But it is also "quirky", and adding some (optional, possibly slower) syntactic sugar over it could make it more appealing. Sure, Lua fans will argue that tables are better than anything, but those who just want to transfer existing language knowledge are hard to convince, and setmetatable makes everything even weirder.
> this slow fading away into the embedded only realm would be quite a shame.
I assume you mean "embedded use in applications" (i.e. not embedded systems). That's exactly what Lua was designed and built for by its original authors. And that's how it is mostly used.
> adding some (optional, possibly slower) syntax sugar over it could make it more appealing
Lua can do surprisingly much for such a lean and simple syntax. If you instead prefer a baroque, pretentious syntax like Python or TypeScript, and are obviously satisfied with Python's performance, you already have a well-established solution. And as I've demonstrated e.g. with https://github.com/rochus-keller/Oberon, you can replace Lua with a more complex language and still profit from the performance and leanness of LuaJIT.
For me the biggest things with the newer Luas would be the bitwise operators, 32-bit number support, and UTF-8 support. I haven't played with 5.4 yet, but the garbage collection changes look rather nice too for some of the embedded work I've done with Lua.
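For reference, a small sketch of those additions, assuming a stock Lua 5.3+ interpreter:

    -- Lua 5.3+: native bitwise operators, an integer subtype,
    -- and a basic utf8 library (none of these exist in stock 5.1).
    local flags = 0x0F
    print(flags & 0x03)       -- 3
    print(flags << 4)         -- 240
    print(math.type(1))       -- integer
    print(math.type(1.0))     -- float
    print(utf8.len("héllo"))  -- 5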
As someone who just worked on a project where I embedded Lua, that's exactly what I was looking for. Why would I want to choose this over Lua, and what's the pro/con matrix?
Agreed, Lua seems to be the de-facto standard here, as it's so lightweight to embed. But the language has some pretty janky rough edges (e.g. only having Tables and no other data structures) so a modern alternative would be nice.
Yes, Lua tables are an incredibly versatile tool. I'm not a fan of global-scope-by-default and of the verbosity, but other than that it's a fantastic language.
Global scope by default isn't good practice in embedded. Global scope isn't synonymous with static allocation, but unfortunately that goes over the head of many people. Best practice in embedded is to have locally scoped statically allocated data passed around by reference.
It really depends on the application. For smaller, more targeted embedded applications, global scope is fine. If every function needs access to a common set of variables, there's really no need to be passing references around.
Even if the application is tiny, that's no excuse. It makes it even easier to chuck everything into a single struct type. It's not like there's any performance gain to having globally scoped stuff on a modern compiler compared to passing references.
One reason using global variables is bad practice is that it makes testing harder. An unfortunately high percentage of embedded software doesn't have any sort of harness-based testing because it's written with globals spammed everywhere, which prevents you from using any kind of principled testing strategy where you mock out all the hardware dependencies. It's especially bad if there's globally defined MMIO stuff like "#define CCR1A (uint64_t)0x74FEA10". Good luck testing that!
Smaller embedded targets don't have modern C++ compilers. Also many engineers want to solve domain problems instead of dealing with C++ related problems.
In the domain of C, passing by reference means passing a pointer. If you chuck everything into a single struct and pass it by pointer, it has the same problems as global scope.
Not that I'm advocating for global variables. Even tiny projects tend to grow with time, and localizing scope across the code base is not fun at all. In the context of Lua, I've just trained myself to prefix variables with 'local' and I don't give it much thought.
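A minimal sketch of the default being discussed: in Lua a bare assignment creates a global unless you write 'local':

    local function counter()
      count = 0        -- global: visible (and mutable) from anywhere
      local step = 1   -- local: only exists inside this function
      count = count + step
      return count
    end

    counter()
    print(count)  -- 1   (the habit is to write 'local count = 0' instead)
    print(step)   -- nil (locals don't leak out of their scope)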
> If you chuck everything into single struct and pass by pointer, it has same problems as global scope.
Not true. In fact, this refactor is one of the best things you can do to improve an old shitty embedded C codebase. Among other benefits, it allows you to have multiple instances (an arbitrary and easily-adjusted number, in fact) of a system sharing the same memory, reduces the complexity of linker-related BS, and simplifies testing. It's vastly better than relying on horrendous C cross-module scoping rules for sharing.
To me the main problem with globals is that any module can mutate them and affect any other part. All encapsulation and modularisation is then leaky, and you are always on your toes about implementation details in some other part of the code. Your approach does not attempt to solve this downside of globals.
I agree that passing in structs is vastly better than communicating over globals. On the other hand, taking an existing code base and implementing this state-passing is quite a large undertaking that affects all function declarations/implementations. It might be beneficial, but there are often better investments of your time.
A lot of people who don't like Lua list "only having tables" as some sort of negative, but honestly I don't see how that is bad in any way.
1. Having only one data structure makes for a simpler, more elegant language. Once you understand tables, you understand everything you need to know about Lua.
2. You could implement any other data structure with a table (see the sketch after this list):
* A table is basically a dictionary / map already
* A table can be an array if you use numbers as keys. They don't even have to start at 1.
* A table can easily be made into a set if you use the elements of the set as keys and 'true' as the values.
* A table can be made into a proper OOP class with metatables
* A table can use prototypical inheritance too
* A table can act as a basic data record object
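A short Lua sketch covering a few of the shapes above (all names are illustrative):

    local array = { "a", "b", "c" }            -- array: consecutive integer keys
    print(#array, array[2])                    -- 3   b

    local set = { apple = true, pear = true }  -- set: members as keys, true as value
    print(set["apple"], set["plum"])           -- true   nil

    local record = { name = "Ada", age = 36 }  -- plain data record
    print(record.name, record.age)             -- Ada   36

Metatables (as in setmetatable-based class libraries) cover the OOP and prototype-inheritance cases.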
Those using the language don't really miss having the more specific data structures.
In my experience, it took a bit of conceptual work to get used to the "Lua way", and that impedance mismatch would have been reduced if it shared more similarities with the other languages we were working with.
Context is important IMO; if you're going to be using a language as your primary development tool, then you'll get over that hump fairly quickly, and my objection is less relevant. But for our use-case (embedding Lua in a C application to have user-provided scripts drive our C library's callback hooks), the developers were primarily working in C and Python, and so most of us didn't use Lua often enough to really click with it.
For our use-case, something a bit closer to Python or Java would have been much easier to grok, and therefore would have made our development easier and more productive. "Easy and more productive" is all I'm really looking for in a language, always within the context of the specific use case of course.
I'd be fine with dropping most of the sugar in Python or Java, but I'd be surprised if there were no "zero-cost abstractions" that could be added to Lua without making it too heavyweight for embedding.
There are a lot of alternatives. See https://github.com/hengestone/lua-languages. These languages use the Lua or LuaJIT VM as a backend, either as a Lua transpiler or a bytecode compiler.
To be completely honest with you, I didn't even realize I could scroll. That full-page landing page did have what I wanted: a single-sentence summary and a "get started" button to take me to the documentation, which loaded impressively quickly. I just realized that's because it's all on the same page.
You always could, any of the thousands of React Native, Flutter or NativeScript apps are running a VM. What you can't do is run code that the user provides, or use it to sidestep app store updates (with plenty of exceptions for both rules).
There are iOS apps that allow you to edit and run python code.
The amount it can do with the OS and such is fairly locked down, but the version I used to have installed (incompatible with iOS 11, so I can't use it anymore) IIRC even allowed using import os to run some shell commands (though, of course, the shell was very locked down).
I think what you're thinking of is the JavaScript JIT situation. A JIT compiles and executes arbitrary, new, native code at runtime. A regular interpreter doesn't create native code.
> Closures are self-contained blocks of functionality that can be passed around and used in your code. Closures can capture and store references to any constants and variables from the context in which they are defined. Closures can be nested and can be anonymous (without a name)
I think the term "closure" gains some unnecessary semantic meaning of a block of code / anonymous function. I might be wrong, but it is better to think about it as simply a technique for implementing lexical scoping.
In other words, "lexical scoping" is a property of a language, while "closure" is only an implementation detail to make lexical scoping work. So the term closure does not have to leak in the description and semantics of the language itself. What is your opinion?
Edit: I just think that such proliferation of terminology confuses people, making them ask questions like: "What is the difference between 1) function, 2) anonymous function, 3) lambda function, 4) closure?" Instead, focusing on the idea of a function (possibly without a name) + lexical scoping clarifies everything immediately.
Closures aren't necessary for lexical scoping, and in practice their only real use (in imperative languages; it's obviously quite different in e.g. Haskell) is to pass around blocks of code combined with captured variables. I think this definition is quite reasonable.
But such a closure block is a function (it might be sugared like a Ruby block or something like that, but it remains a function that captures its environment, thus implementing lexical scoping).
Well, okay, if a language does not normally have lexical scoping, but has a special "closure" feature to implement it (dunno, maybe C++ lambdas may be regarded as such a special closure construct?) That could be a justification for the term closure on its own. But if the language has lexical scoping by default (e.g. Gravity says that it has lexical scoping in its overview), then, I think, there is no need in a separate notion of "closure" in the language semantics.
Lexical scoping doesn't necessarily include lexically scoped function definitions.
Lambdas have been spreading lately, but plain old functions are still not lexically scoped in a lot of languages otherwise regarded as such: C/C++/Java/Pascal, etc.
What do you mean by lexically scoped functions? A nested function?
My point is that instead of reusing the term closure, one could use the term anonymous function. To rephrase my question: Do you have a specific example where a nested function (or an anonymous function) is not lexically scoped (In a language that is otherwise lexically scoped)?
Actually, I know Ruby messes up its functions this way:
    def f()
      x = 1
      def g()
        return x
      end
      g()
    end

    puts f()
    # => undefined local variable or method `x' for main:Object
    #    (repl):6:in `g'
So the nested function g() cannot capture the enclosing scope. I think they had some justification for this funny behavior, because def defines a method rather than a plain function (I could be wrong). But in any case, there are examples of languages that don't allow nested functions to capture the environment (while a proc or block would capture it).
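For contrast, here is the Lua equivalent of that Ruby snippet; the nested function does capture x from the enclosing scope as an upvalue:

    local function f()
      local x = 1
      local function g()
        return x  -- sees the enclosing local
      end
      return g()
    end

    print(f())  -- 1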
After having looked at it for a minute, it looks like a very simplified version of Swift. Looks great! Might try out the language on my Raspberry Pi and see how fast the thing is :p
I never really used it, but I did make some contributions to the language. In looking through some of the ecosystem, it seemed like the library ecosystem left a bit to be desired, but the standard library does show promise. I think its extensibility (proven by my ability as a noob to make useful stdlib contributions) and its use cases in embedded applications mitigate the lacking ecosystem somewhat.
The first thing I want to know about a new language is its performance. Then I can start thinking meaningfully about the tradeoffs of using it compared to other languages.
For an embeddable language, performance probably isn't the most important property for typical use cases.
Ease of integration, interoperability with the "host" language, the embedded size of the scripting engine or compiler, etc. all might be more important.
The fact that it sets uninitialized variables to null is mind-blowing for a language created within the past 5 years. We have all learned that "null" is there just to create bugs.
Please consider creating a Gravity 2.0 where this is a compile error; you will save your users a lot of debugging time.
This is a dynamically typed language, it has nothing in its static type system to distinguish between definitely-assigned values and optionally-assigned values. That is, there's no way to tell the compiler that a variable or function parameter or return value is a T vs a Maybe<T> or Either<T,U>.
At best it could perform definite assignment analysis on variable declarations, but it can't go very far. In particular, it would do nothing for function return values.
As a result, the best it could give you is runtime errors when you e.g. access the value of a dynamic Maybe without checking for presence first. Without static analysis, this isn't hugely useful.
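To make that concrete, here is a hypothetical table-based Maybe in Lua (Maybe, some, none, and get are made-up names); without static analysis, the empty-access error only ever shows up at run time:

    local Maybe = {}
    Maybe.__index = Maybe

    function Maybe.some(v) return setmetatable({ present = true, value = v }, Maybe) end
    function Maybe.none()  return setmetatable({ present = false }, Maybe) end

    function Maybe:get()
      if not self.present then
        error("accessed an empty Maybe")  -- only ever caught at run time
      end
      return self.value
    end

    print(Maybe.some(42):get())  -- 42
    Maybe.none():get()           -- raises "accessed an empty Maybe" at run time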
Uninitialized variable access trapping at run time seems useful, since it's likely an error. Some statically typed languages do comparable things too if their static type system is not very powerful; see e.g. trapping representations or IA64's NaT in C.
Pure FP isn't the only acceptable programming paradigm, please stop pretending that it is.
(For context: a language without null pretty much implies an Option type, which is mostly useless (or no better than null itself) without static typing. Once you're that far, you're probably going to want monads to make them workable, and congratulations, you're reinventing Haskell.)
First: yes, null vs. option types is all about static knowledge, so it's a pretty irrelevant topic in the context of a dynamic language. In my mind, in such languages a null-propagation operator is basically all you need.
But come on, it's not about FP. Check out C#'s Nullable&lt;T&gt; for value types, for example. C# 8 brings the same principles to reference types (though without full safety, since you cannot break compatibility). The only good reasons to have (Java-like) null are either that you already have it and cannot take it away, or that you are in an ecosystem where it's fundamentally baked in.
And to be precise, it's not about null per se. Null can be just fine, but it shouldn't be a valid value for every type. You can solve that either by boxing (a traditional option type) or with a TypeScript-style flat union: Car | null.
In some sense all languages with Java-like null have some option types, but they are missing regular non-nullable types.
Actually, it does. If you want a sound and complete static type system for a practical language, you end up with monads (or something very close to it, like streams or effects) or you cannot type your side effects (and then your type system is either incomplete or your language impractical).
What does it mean for a type system to be “complete”? Type soundness means that well-typed programs do not “get stuck”, but it is not obvious at all (at least to me) what it means for a type system to be complete, formally speaking.
If you just mean that a program has certain properties (like performing IO) that are not expressed in its type, then I can see what you mean. But even in Haskell, some properties like “this function might fail to terminate or throw an exception” are not expressed in the type (although you can use monads for those things in languages where all functions terminate by default). There are no distinctions between a function that performs network accesses and one that does not, or functions that never return odd numbers, etc. There are infinitely many program properties that you can come up with which are not expressible in Haskell types.
Even dependently typed languages with very expressive type systems cannot capture all possible program properties due to logical incompleteness results.
A type system is complete when every error-free program is well-typed. This is, afaik, seldom the case.
I was using the term more informally. In fact it is the operational semantics of your hypothetical language that is not complete, in the sense that you will have a hard time putting it into a form that allows you to do anything meaningful with it (e.g. prove the absence of certain errors).
Interestingly, you can define such a language with side effects (think ML), by actually pulling the state monad into your meta-level (for instance by turning your reduction relation into an instance of a state monad).
Practically, what ways to "type your side effects" are useful and not only mandatory formalities to appease the compiler?
For example, writing something or something else or nothing at all to the standard output doesn't appear likely to influence the execution of the rest of the program, even if formally the program is in a different state.
It's just saying that your type system is incomplete if it doesn't capture side effects. That is, if the function can do something that is not expressed as part of its return type, its return type is insufficiently expressive.
As an example, imagine an "updateDB" call that returns void. Totally could exist, and be useful, but all it does is a side effect; as a function you haven't expressed its functionality via its type. From a 'pure' perspective, it would be fair for the compiler to remove that call entirely, since you do nothing with the result (since there is no result). Instead you need a monad or something to express that side effect.
Obviously, many languages don't choose to express side effects within their type systems. That doesn't mean the language isn't useful, just that its type system isn't complete; you have things happening without a type attached to them (and in that way it's dynamically typed).
The problem is more that you cannot formally express your type system in such a case. The "pure" perspective you mentioned is basically the limited view that one can still work with when defining a type system. Side effects are then basically magic functions that have any necessary type (in fact, OCaml has such a function in its stdlib, called "magic", if I am not mistaken).
What a narrow-minded view. The problem is not that null exists, it's that many statically-typed languages don't let you distinguish nullable from non-nullable variables. In the ones that do, null is functionally equivalent to Maybe/Option.
And in languages that lack static types altogether, it's a moot point.
Null in a language with stack traces is not the end of the world. Null is a value, and compared to other erroneous values it's much easier to find and fix. Often it is not erroneous; it's extra information that you can use to make a decision.
I really don't get it because in many years of programming, in practice, null pointers are one of the least of my worries.