Using Lua as a Serialization Format (mkhan45.github.io)
115 points by fish45 on June 18, 2020 | 94 comments



Fun fact: Lua started its life as a language for representing data like this. At first it only had lists and hash tables but over time it gained support for things like if statements and loops, to be able to describe more interesting data. Eventually it morphed into the Turing complete programming language that we know today :)

https://www.lua.org/history.html

The current version of Lua still has some traces of that origin. For example, to keep RAM use manageable when parsing very large files the parser directly emits bytecode in a single pass, without building an intermediate abstract syntax tree.


Ironically, your username is one of the neatest Lua[JIT] projects I've seen. https://github.com/malkia/ufo


Unspoken HN rule #27: There's a 75% chance this isn't a coincidence.


Though the person who wrote that library has a different HN account: https://news.ycombinator.com/item?id=1643658


This isn't an argument for putting a Turing-complete language in your config file parser. It's an argument for documenting your config file format so your users can write scripts (in a language of their choice) to generate config files programmatically.

It seems cute to do it this way, but now you have to start talking about removing certain functions from the standard library, running the script in a sandbox, etc.


Lua has features to do just that. It is fairly easy for an app to load Lua source code into a protected environment and expose only what you need. You can block all forms of filesystem and hardware access and leave the user with only the functions you provide and the language itself.
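
For reference, a minimal sketch of what that can look like (assuming Lua 5.2+, where load() accepts an environment table; Lua 5.1 would use setfenv instead). The add_body callback is just a stand-in for whatever the host exposes:

    -- build a whitelist environment: the config chunk can only see these names
    local env = {
      math = math,              -- pure functions are fine to expose
      string = string,
      add_body = function(t)    -- host-provided callback (stand-in here)
        print("body at", t.x, t.y)
      end,
    }

    local function load_config(source)
      -- "t" = text-only chunks, no precompiled bytecode; env replaces _ENV
      local chunk, err = load(source, "config", "t", env)
      if not chunk then return nil, err end
      return pcall(chunk)
    end

Note there is no io, os, or require in that table, so the config can't touch the filesystem unless you hand it a function that does.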


I've been working with Lua on a project and have found this to be a really neat concept.

When you set up the runtime in your host language, most libraries make it trivial to define that a given function calls into your own code, i.e. I could define 'getUser' in my Lua code to call back to a C# function with arguments, etc.


What is your opinion on config files used by the Bazel build system, written in a programming language called Skylark [1] (much like Python)? I haven't looked into it but it seems plausibly Turing-complete, or at least powerful enough to cause problems in the same way a Turing-complete language would.

[1]: https://docs.bazel.build/versions/3.3.0/skylark/language.htm...


For the record, Starlark has some restrictions: no infinite loops, no access to the world, etc.

It's true that computations could be arbitrarily long (as with most other languages). Bazel has a flag to mitigate this kind of problem (--max_computation_steps).


It's fine if you don't consider the config files a threat vector.


Or use jsonnet or something like that.


About 20 years ago, I used Tcl as an object serialization format in a game engine, and then made the further mistake of allowing manually written 'scriptlets' to be injected into this serialization format (after all, serialized objects were just Tcl scripts).

These two things combined were probably the biggest software design mistake I ever made. We still have to drag a stripped down Tcl interpreter along in our asset pipeline 20 years later because it's nearly impossible to switch away (because of the manually written 'scriptlets' lurking everywhere in asset source files - of course it would be possible, but it was never on the right side of the cost-benefit equation).

Never mix code and data, folks :)


Aside from the additional dependency (Tcl; but is that really such a big dependency?), can you share what other experiences made this a bad idea?

The article actually argues in favor of doing this, i.e. using a scripting engine (Lua here) to store your configs/settings/data, and I agree that it can make certain things much simpler and cleaner.


The main problem is allowing "free scripting", because those script snippets are hard to automatically convert to something else. But if you don't allow or need that, why have the runtime overhead (in performance and size) of a scripting engine for loading data at all? And even when the scripting engine has already been integrated for other things, it's still a bad idea to use it for data files, unless you forbid any sort of human scripting (so basically JSON vs JavaScript).

If you have pure data file formats, like JSON, XML or even your own custom binary formats, it's trivial to upgrade or change them (such as switching from XML to JSON to your own binary format and back).

But once you have human-scripted parts such an automatic conversion suddenly gets much harder or impossible, since now you need to transpile from one scripting language to another.

That's pretty much the worst case, but the problems already start when APIs or data layouts change. If a data item goes away, it's trivial to automatically inspect a pure data representation like JSON and remove or ignore such obsolete data items. But with free scripting, you suddenly get an obscure scripting runtime error.

And finally, if you want to process such mixed data / code files in another tool, you always need to integrate that specific scripting engine, which is a much bigger dependency than (for instance) a simple JSON or XML reader/writer, or even just your own binary file format readers/writers.

For instance, the Emscripten SDK's .emscripten config file suffers from this: at a glance it looks like a simple text file with key/value string pairs, but on closer inspection it must actually be evaluated by a Python interpreter to be of any use. So what could be a simple text-parsing tool to extract filesystem paths from this config file suddenly requires a full-blown Python environment.

So basically, by allowing "Turing-complete" config or data file formats, you make future changes much harder than they need to be.

Of course all of this isn't all that important for small throwaway projects. But sometimes those small projects (or quick'n'dirty solutions in general) become much more important and longer-lived than expected.


As someone who's had a love-hate thing with domain-specific languages - reading your story was a good learning experience.

There's great value in sticking with "inert" data formats like XML or JSON, which have standard parsers in most languages, developed over years.

Once data crosses over into code, there's a whole host of potential issues to consider, including the requirement of an interpreter/compiler and a sandboxed environment, with all that entails. The Emscripten config needing Python is a perfect example.

Another example I thought of, is Markdown and MDX. The latter allows the use of JavaScript and React components mixed with markup, for dynamic content. It's brilliant in its own way, but I'm wary of using it as a document format precisely because it's no longer just data. Documents in MDX can't be consumed by other languages, and will always require the full JS/React environment.

Tcl actually feels like it could be reasonable to use for data, due to its ubiquity and longevity. Lua, or better yet some kind of Lisp, also seems attractive as a serialization format. The interpreter could be small and stable, and the latter at least might even be simpler to parse than XML or JSON.

> We still have to drag a stripped down Tcl interpreter along in our asset pipeline 20 years later because it's nearly impossible to switch away

This makes me see how the decision created a long-term dependency on a language, with the data no longer in a portable format. Good to keep this in mind next time I'm tempted to allow data to become code.


A stripped-down Tcl interpreter is one of the tiniest and least harmful dependencies you can have nowadays. I do not see a problem with that.

And code is data, and has always been.


If you follow the mantra of 'code is data' then you have arbitrary and difficult-to-debug execution anywhere you have data. Intentionally mixing the two is a software-design blunder that seems to be repeated over and over, maybe because people keep encouraging it without realizing the cost in non-trivial projects.


Until someone sends serialized code-data like:

    for i = 1, 1000000 do
        for j = 1, 1000000 do
            for k = 1, 1000000 do
                for l = 1, 1000000 do
                    ...
Once you move your publicly injectable data format from declarative to something else, you're in for a world of hurt.


Tcl has had sandboxed interpreters with resource limits for a long time [1]. You can even remove built-in commands (for/while etc) if you want to. You can limit the language to a purely declarative subset and selectively expose commands and resources as you wish.

[1]: https://www.tcl.tk/man/tcl8.5/TclCmd/interp.htm#M46


If you use Lua only for your configuration data, patching the Lua interpreter to limit the CPU time is rather easy: Increment a counter within your lua_State struct here: https://github.com/lua/lua/blob/c33b1728aeb7dfeec4013562660e..., then abort once you reach a limit.


The author is surprised by the convenience of describing data in Lua. I think he's in for another surprise if he re-implements the simulation on the Lua side. LuaJIT is so fast that I can just write the most naive implementation and never go back to refine it. It's the simplest fast language and, to me, the optimal choice for fully utilizing a single CPU core.


Well, I've been super impressed by LuaJIT in the past, but given that the simulation is pretty heavily multithreaded, I don't think a Lua version would do so well in comparison.


It's trivial to have multiple Lua states running in different threads in the same process. Messaging and data exchange between those states is easy and LuaJIT gives you a lot of tools to make it painless.


Lisp programmers realized in the 70s that having an artificial division between code and data was frequently problematic, hopefully other languages continue to catch on as well :)


On the other hand, it's often useful to have data and configuration files that aren't Turing complete, because then you can safely make assumptions about how much time and space is required to load/parse them.

Unfortunately every configuration file format these days seems to add so many features that it inevitably becomes Turing-complete... the trick is finding the balance between expressiveness and halting decidability.


Lisp data aren't Turing complete, only a particular interpretation of them (i.e. EVAL), as you know. You're perfectly welcome to call READ on a file and you will get unevaluated lisp data back (particularly when you set READ-EVAL to NIL in common lisp).


READ is Turing complete, because the reader can execute arbitrary code: the #. read macro alone is enough to ensure this.


If you set read-eval to false, it will not execute the #. reader macro. [1]

[1] http://www.lispworks.com/documentation/HyperSpec/Body/v_rd_e...


Yeah, but I can use SET-MACRO-CHARACTER to define a version that ignores that variable: basically, you should never use CL:READ on untrusted input, unless you control the readtable.


A non-Turing-complete language can still express programs that require exponential time and space, so is that really enough to provide useful guarantees? And would dynamic limits on resource usage not be sufficient, since the "cannot load configuration"-error would occur at runtime anyway?


You might like to look into Dhall - a config language that has been careful to remain non-Turing-complete.


Wow, cool, that is a really nifty language!

I don't care for the syntax (at all) but I am seriously impressed by the careful design. I've always wondered where you'd need to draw the line between plaintext config files and executable code, and Dhall is much further into "programming language" territory than I would've expected.

Dhall homepage for anyone else who's interested: https://dhall-lang.org


Why is configuration performance so important to the application?

Sounds a bit like a missing specification for the distinction of load-time and run-time configuration.

Configuration, to me, is something that happens once and not often, and is therefore "something = foo". Update is a run-time thing, and therefore "if something then bar".

Lua covers both of those cases superlatively. Case A, byte code. Case B, same. Just be sure you have "generate byte code" in the right place, C, too ..


Let's say you want the user of your service to be able to write and upload a configuration file, but you specifically don't want them to be able to execute arbitrary code on your server(s).

It would be nice to be able to statically verify that their submitted configuration "behaves nicely", but the moment you introduce any kind of Turing-complete configuration language all guarantees go out the window because the problem becomes literally undecidable. [And then you try to mitigate it by imposing limits like "kill the parse process if it takes longer than X seconds or uses more than Y megabytes of memory".]

The advantage of, say, a simple .INI file for configuration purposes is that the time required to parse it (and storage space required to remember what's in it) is O(n) where n=size of config file...

EDIT: so take the example Lua file from the original post but change the loop limits from "32" to "16777216":

  for i = 0, 16777216 do
     for j = 0, 16777216 do
        add_body({x = 20 * i, y = 20 * j, mass = -0.1, rad = 2})
     end
  end
Congrats! Your 128 byte long configuration file now requires 2^48 iterations of the inner loop to parse and generates 2^48 instances of the "body" object. That's an expansion factor of at least 2^41 generated bytes for every byte of input, which is a shockingly bad Denial-of-Service attack against whatever is responsible for parsing the evil config file.


Proper limits are needed regardless of how the config is loaded. In this example I would expect add_body to fail after adding some number of objects well before the loop limit is encountered.

More troubling are loops which consume CPU without hitting memory limits. I would hope Lua provides a way to limit the number of instructions when evaluating expressions. If not then delegating reading the config to a limited subprocess could work at the cost of still more complexity.


I hadn't added this before I posted here, but Lua does have a way to limit VM instructions, so I've added a pretty conservative limit.
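
For anyone wanting to do the same, the usual mechanism is a debug count hook, roughly along these lines (a sketch; the exact limit and error handling are up to you):

    -- run an untrusted chunk, aborting after roughly max_instructions VM instructions
    local function run_limited(chunk, max_instructions)
      debug.sethook(function()
        error("instruction limit exceeded", 2)
      end, "", max_instructions)   -- empty mask + count: hook fires every N instructions
      local ok, err = pcall(chunk)
      debug.sethook()              -- clear the hook again afterwards
      return ok, err
    end

Keep in mind the sandboxed environment shouldn't expose the debug library itself, or the chunk could simply remove the hook.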


This is hardly a failing of Lua morphing from a config language into a Turing-complete one; it's missing sanitization in the customer interaction workflow.

Like, I get that there are services that require this level of trust in the end user, but why wouldn't I solve this problem by timing the config load and immediately stopping any config process that takes longer than it should? It's the 21st century, we can still use interrupts. ;)


The basic conundrum, as I was just explaining to a friend, is that as you drop syntax and validation, you gain more possible expressible meanings. A machine code instruction stream, concatenative syntax, and S-expressions all sit in very similar territory in that they are very close to a computed result, and in many cases will accept fairly arbitrary combinations of symbols. Concatenative is in one sweet spot since it deals well with compositions of partial expressions that have no particular "address", while s-exps are in another, where the meanings are bundled into fully addressable trees. Both Forth and Lisp make use of quoting to turn data into code and vice-versa, and introduce ways to assign persistent names and therefore reduce local context and allow reusage; their main difference is in whether they primarily serialize a figurative representation of data structure, or the step-by-step sequence of computing the structure.

And whichever you choose, that's great (I've gone on a huge concatenative binge myself), but then to define the application's final protocol, you have to filter it. You have to filter to turn many of these meanings into errors, which leads towards syntax, type-checking, and all the rest.

And to the extent that this is a problem, it mostly reflects the human and philosophical factors. What we believe in determines what meanings we wish to see represented in the protocol; our beliefs are many, varied and often contradictory, with entire organizations and systems of governance designed around reconciliation of these beliefs; therefore extensively structured protocols are necessary to define the "sometimes-errors" that result from our incoherence, our limited notion of reality. And that is producing the gradual ramp up towards complex languages, formats and protocols that address the impossible with the incomprehensible.

But data and protocols that are very truthful usually do not require such a complex structure, or that structure emerges from natural laws. They may be hard to compute, or hard to store - DNA sequences are very large, and modern cryptography is premised on computational difficulty - but the code that deals with these things is not large in expression, and only needs validation along some very particular axes. Regardless the task of discovering and implementing these things is hard, because evaluating the theory is hard, much harder than validating whether a stream of bytes represents a number, a string, or a list.


Security researchers on the other hand run away screaming when you tell them that all of your data and configuration files are a complete language and you execute the language to parse the files.


In the case of Lua, you can load those data and configuration files in a protected environment with access only to the functions you provide. This is a built-in feature, and a pattern that many people who integrate Lua in their app follow.


The problem is usually not Turing completeness but complexity, which invites bugs in the parser implementation. (The former is trivial to solve: add a timeout.)


> The problem is usually not Turing completeness [...]

You're technically correct (the best kind of correct), but jandrese wrote "a complete language", not "a Turing-complete language". You don't need Turing-completeness to create security problems – filesystem access alone does that.

> The former is trivial to solve: add a timeout.

That only solves the termination problem, which is a minor concern anyway. You don't need much time to compromise a system if you have filesystem access. No "bugs in the parser implementation" needed.


You can start a lua VM with an empty environment (basically no standard library).


The first thing any scripting-language-used-as-flexible-config-files will want to add is a way to include shared config files, typically with computed (i.e., non-constant) file names or paths. It's possible to sandbox this, but also rather complex compared to a non-executable data file format.


You're going to want the ability to pull in shared config files whether you're using a scripting language or not. And the way you need to sandbox those paths is roughly the same either way.


Why would only security researchers care about this? To everyone writing a service that accepts un-trusted (network, user) data this should be a very bad idea.


Because anyone writing code that needs to accept untrusted data is de facto acting as a security "researcher"!


Depends on the type of security researcher -- some profit from it!


Depends on the kind of work it is. A janitor doesn't go around telling you to make a mess so they can remain employed…


Unless they're the type of janitor who "really cleans up" when called in to "do a job" at the office overnight.


I tried this on 1 or 2 small projects, years ago.

It was okay. It's cool that you can import data directly into Lua scripts, since Lua usually has poor library support for any other data format.

But I hit a wall with a limit on the number of tables in a file or something, that was specific to the Lua impl I was using. (Probably LuaJIT)

Sometimes I miss Lua. I think with only a few big breaking changes it could be a much better language.


I used Lua as a data exchange format for some embedded devices (software defined radios) because it played nicely with the C code and the Lua scripts we would run on the device (which served as a sort of interface bridge between hardware device drivers). The Lua binary can be compiled down to < 100kB and the language itself is fast enough for a scripting language to have been useful in this context (I don't know if that still holds today, I left embedded in 2014).

It was a really pleasant experience but we were not working with a very large number of tables. I miss Lua all of the time, too, and play around with it whenever the opportunity arises. It holds a really special place in my programmer heart, alongside lisp, C, and Ruby, for having shown me some really interesting things at the right time in my career. It's too bad it doesn't have more widespread adoption, I think a lot of people could benefit from using it for the things it's good at.


There are a number of places it's being used today. There's an nginx module with an entire web front-end development environment (OpenResty), Cloudflare uses the nginx Lua module heavily, and you can develop cross-platform (iOS, Android, web) games in Solar2D.


The NodeMCU[1] environment for ESP8266/ESP32 microcontrollers is pretty neat too -- it targets a $2 microcontroller with wireless network access (and Bluetooth in ESP32), with builtin functions to make HTTP requests and interact with external devices.

[1]: https://www.nodemcu.com/


I'm aware, but I've never seen a shop I've consulted for, or any of my employers, use it. I've tried to get it in use a few times and ultimately it went nowhere, because I would be the only programmer in a department with > 100 FTEs able to work on it or maintain it unless some others took the time to learn it. It's been a non-starter as a result.

When I ask around, I don't hear about it in use locally (edit: that's actually not entirely true, Blizzard uses it and maybe some of the other local game shops, but I don't see it anywhere else locally). I met someone at a conference a year or two back who was using Kong. That's been it. Maybe I'm just missing it but I don't really see widespread use.


It never really saw adoption outside of games from what I've seen.

It's such a simple but powerful language, has one of the best interop APIs with C, and I love how they approached coroutines.

It's really a shame it doesn't see wider use. We used it to ship all our game logic on the PSP in a 400 kB prealloc block, and it worked like a charm.


It's used in CHDK, the Canon custom firmware project, for user-definable scripts (https://chdk.fandom.com/wiki/CHDK). Besides this I've rarely encountered Lua in the wild.

Edit: And of course in Magic Lantern, for Canon EOS DSLRs https://magiclantern.fm/


I built a digital signage solution for the Raspberry Pi based on Lua (https://info-beamer.com) which turned into a company. API documentation is on https://info-beamer.com/doc/info-beamer, while most source code for what's running on the screens can be found on https://github.com/info-beamer/. Couldn't have done it without Lua.


Kong[1] is basically a bunch of Lua scripts on top of OpenResty.

Redis[2] lets you write internal scripts in Lua.

Edit: Forgot Torch[3] which is great for generating nonsense with RNNs.

I think it's definitely getting more of a foothold in the world outside of games.

[1] https://konghq.com [2] https://redis.io/commands/eval [3] http://torch.ch


There are a few Lua Lapis users out there: https://leafo.net/lapis/


I'd used lua in my early teens to script some games and ended up receiving money for my efforts, so I put it on my resume coming out of university.

It turned out that one of the first companies (embedded, automation) I interviewed with used it for integration between various third party subsystems. I think those conversations helped me feel like less of an imposter at the time.

I never use it anymore, but the language is nice. I wonder what web development world we'd live in now if they'd embedded lua instead of creating JavaScript.


> It's too bad it doesn't have more widespread adoption, I think a lot of people could benefit from using it for the things it's good at.

I think that's because there aren't many things it's really good at. It excels at a niche (as a lightweight, easily embeddable scripting language) but is mediocre as a general-purpose programming language, because its standard library is extremely spartan and lacks so many things you need on a day-to-day basis.


I disagree, there are many things it is great at.

The language is well designed, with orthogonal features that complement each other. Coroutines and metatables are easy to understand and still a powerful way to organize and control a code base.

You can actually learn it in a week, and hold all of the language in your head. A very valuable thing in my opinion. The code is very readable to beginners (unlike Perl, Haskell or APL).

The interpreter is incredibly fast. Even non-JIT implementation is surprisingly fast.

The LuaJIT FFI is so convenient that the difference between C/C++ libraries and Lua libraries almost disappears.

There are cross-platform hosting frameworks that allow you to write and execute dynamic interpreted code on otherwise locked down platforms (Android, Switch, TVs...).

I would say Lua lacks tooling and better community organization. The biggest Lua communities were built around products that embed Lua, like WoW, Roblox, and Corona (now Solar2D).
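
To illustrate the FFI point above: binding and calling a C function from LuaJIT is about this much code (LuaJIT only; plain Lua would need a C module instead):

    -- LuaJIT FFI: declare a C function and call it directly, no glue code
    local ffi = require("ffi")
    ffi.cdef[[
      double sqrt(double x);
    ]]
    print(ffi.C.sqrt(2))  -- calls libm's sqrt: 1.4142135623731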


I use Lua at work to process incoming SIP messages (using LPEG [1]). It's handling millions of calls per day without issue.

[1] Lua Parsing Expression Grammar http://www.inf.puc-rio.br/~roberto/lpeg/
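
For anyone who hasn't used LPEG, a toy pattern (nothing SIP-specific, just to show the flavor) looks something like this:

    -- capture a key=value pair, e.g. from a header parameter
    local lpeg = require("lpeg")
    local P, C, S = lpeg.P, lpeg.C, lpeg.S

    local key   = C((1 - S("=;")) ^ 1)   -- one or more chars that aren't '=' or ';'
    local value = C((1 - P(";")) ^ 0)    -- everything up to the next ';'
    local pair  = key * "=" * value

    print(pair:match("branch=z9hG4bK776asdhds"))  -- prints: branch  z9hG4bK776asdhds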


Would you mind elaborating on how it could be improved and how these changes relate to the context in which you've worked?

I've used it a bit for simple game development (Love2D). It's an okay language, but it doesn't seem suitable for building anything beyond small systems. The lack of good IDE support and needing to emulate OOP were a bit annoying to me.


Try ZeroBrane, a pretty good IDE for Lua with debugger.

As for object orientation, it takes a Go- or Rust-like approach where there isn't an explicit class construct, only conventions; you can build your own or use libraries to get there. In fact, Rust and Go are younger, so Lua should be appreciated for doing this a lot earlier. Rigid object-orientation generally makes a language less usable with other paradigms like functional, imperative, generic, etc.


OO in Lua is the same as OO in classic JavaScript, just slightly more manual. In fact, in lots of ways Lua can be viewed as a cleaned up, simplified JS.
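
For anyone who hasn't seen it, the usual metatable/prototype pattern is only a few lines; a sketch:

    -- a "class" is just a table that serves as the metatable's __index
    local Point = {}
    Point.__index = Point

    function Point.new(x, y)
      return setmetatable({ x = x, y = y }, Point)
    end

    function Point:length()                    -- ':' adds the implicit self parameter
      return math.sqrt(self.x^2 + self.y^2)
    end

    local p = Point.new(3, 4)
    print(p:length())                          -- 5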


I often use a mix of moonscript [0] and Lua, because the Lua that moonscript generates for classes takes away a lot of the overhead when dealing with the whole heap of stateful things you run across in game dev.

[0] https://moonscript.org/


You might have run into the problem of LuaJIT only allowing 2^16 constants per function.

http://lua-users.org/lists/lua-l/2010-03/msg00238.html


What breaking changes would you like? I have a big collection of desired breaking changes that I plan to actually implement sometime, but I'll try not to bias your reply by listing them :)


Ah, I know exactly what limit you're talking about. I ran into it too.

You can't have more than N local variables, I believe. The problem goes away if you put them into a top-level table.


My game, Shadow Physics, used Lua as a serialization format as a side effect of the game initially having no level editor or level-import capabilities, only Lua scripts to create levels procedurally. Copy and paste was also implemented using Lua; a fun side effect of this was that you could paste arbitrary Lua code as a live-coding technique.


My hobby project, a space game (Space Nerds in Space), allows for Lua scripting, and it can export (most of) the state of the game to a Lua script which, when run, restores that state. Ironically, the state of any currently running Lua scripts is the main thing stopping me from being able to checkpoint the entire state of the game.
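
The core idea, dumping a table as Lua source that rebuilds it when run, can be sketched in a few lines (a toy version; it handles no cycles, functions, or fancy keys):

    -- naive table -> Lua source; assumes string keys are valid identifiers
    local function serialize(t, indent)
      indent = indent or ""
      local out = { "{\n" }
      for k, v in pairs(t) do
        local key = type(k) == "string" and (k .. " = ")
                    or ("[" .. tostring(k) .. "] = ")
        if type(v) == "table" then
          out[#out + 1] = indent .. "  " .. key .. serialize(v, indent .. "  ") .. ",\n"
        elseif type(v) == "string" then
          out[#out + 1] = indent .. "  " .. key .. string.format("%q", v) .. ",\n"
        else
          out[#out + 1] = indent .. "  " .. key .. tostring(v) .. ",\n"
        end
      end
      out[#out + 1] = indent .. "}"
      return table.concat(out)
    end

    -- running the emitted "return { ... }" through load() recreates the table
    print("return " .. serialize({ name = "ship", pos = { x = 1, y = 2 } }))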


I solved state saving for a Lua project of mine by replaying all external input and making the game itself depend only on that input. Of course that's not viable in most cases, but it helped me build https://geolua.com. You can restore a game from 6 years ago almost instantly.


OP if you're the author, FYI first link to previous post ('this is a follow-up to') is broken:

https://mkhan45.github.io/2020/06/12/Lua-integration.html

Edit: Ah, 'Lua' should be lower-case.


thanks, I've fixed it


This is cool. Props to the author for thinking in original ways. Most people would have reached for a static solution like templating markup or a serialization format like YAML. Using a real programming language is way better.


A warning: Don't think that this is a good idea. It's small, it's fast, it's elegant. But it's totally insecure.

Everybody who has access to this format can take it over. Lua can do far too much; that's why everybody uses JSON, which can only represent hashes, arrays, and some limited primitives. No links, no logic, no libraries. But Lua is far too powerful. Game over.

Lisp would be even better, btw. Much easier to parse, and much easier to avoid eval. But his idea is eval.


There are a lot of cases where the input source is trusted.

The author doesn't mention it explicitly, but I think that's his case.

For his situation, I think it's a good use of the tool.


On reading some of the comments here I've added some more security, but you're right that I'm not worried about it. The limitations I added should prevent your average prankster from sending an infinite loop to their teacher, but I don't think they would stop anyone more dedicated.


With Lua you can create environments and define exactly what functions and libraries that environment has access to, and then execute/load your configuration or data file into it. This way it can't have access to i/o or whatever you want to prevent it from accessing. This is a built-in feature since this is a very common use case for Lua.

Lua is not far too powerful, Lua is as powerful as you want it to be.


These are all valid points. But as OP's software is educational and runs locally (I guess), what would be the attack vector?


You can always sandbox the process if you don't trust the input. Ruby has taint checking built-in and most operating systems have lightweight sandboxing solutions. Docker is another way to go. You can just execute the script in a container and then grab standard output as the configuration.


A much simpler thing to do is to simply disable all of the standard library functions which allow i/o. Lua has functionality specifically intended for limiting the environment available to a script, and the standard library is small enough that doing this is actually practical.


I remember playing around with Lua but didn't look too closely at sandboxing facilities but what you say makes sense.


> You can just execute the script in a container and then grab standard output as the configuration.

In what format, that you then have to parse?


Yes. If you're passing it to another process then you would need to parse it. If you were using it in the same process and not crossing sandbox boundaries then you could just pass around whatever data structure was generated without serializing it into another format.


barf barf barf aaaaand barf.


Can you please elaborate on why Lua is way better than JSON? We use JSON a lot for configs and data exchange, and I can't see how we would gain from switching to Lua. One of my developers suggested Lua, so I'm curious.


Some Lua features can make it more pleasant to write the config files by hand. For example, comments (!) and local variables.

More powerful features can also be helpful if you have a large config file that would typically need to be generated by a separate program. One example of this is shown in the linked blog post, when they use a for loop to initialize a large array that has many similar objects.

It can also help if the config file starts gaining special "PRAGMA" fields over the years. There is a certain tendency for configuration files to slowly evolve into an ad-hoc and poorly implemented scripting language and in those cases, switching to a proper programming language cuts the problem at the root.

That said, for data exchange situations JSON has the advantage that it is more well-behaved in the case of malicious inputs.
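
To make that concrete, a hand-written config using those features (comments, locals, a loop) might look something like this; the field names just echo the blog post's example:

    -- gravity wells arranged in a grid; tweak these two and the rest follows
    local mass = -0.1
    local n    = 8

    for i = 0, n do
      for j = 0, n do
        add_body({ x = 20 * i, y = 20 * j, mass = mass, rad = 2 })
      end
    end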


Not GP, but some typical gripes about json are that it does not allow comments or trailing commas. TFA is about adding loops to config files to generate big redundant configs, which may not be something you want.


I used Tcl for that in the past. Allows for some pretty powerful configuration.

For pure "serialization", this is a bit overkill, of course.


I feel like the ron format is just too verbose, and JSON would be comparable to your Lua example. As long as you don't serialize stuff like polymorphic classes (and you don't have those in Rust anyway), you don't need to put the name of every object into the serialization file.


I somewhat agree, but I feel that being able to do math/logic with Lua makes it superior for this use case since I'm not worried about security



