
The game I'm currently working on is built very heavily around Lua. So for the save system, we simply fill a large Lua table and then write it to disk as Lua code. The save file then simply becomes a Lua file that can be read directly back into Lua.
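A minimal sketch of the idea (game_state is an illustrative name; a real serializer also has to deal with escaping edge cases, cycles, and so on):

  -- Recursively emit a Lua table as Lua source. Assumes acyclic tables
  -- with primitive keys and values.
  local function serialize(value, out)
    local t = type(value)
    if t == "string" then
      out[#out + 1] = string.format("%q", value)  -- quoted and escaped
    elseif t == "number" or t == "boolean" then
      out[#out + 1] = tostring(value)
    elseif t == "table" then
      out[#out + 1] = "{"
      for k, v in pairs(value) do
        out[#out + 1] = "["
        serialize(k, out)
        out[#out + 1] = "]="
        serialize(v, out)
        out[#out + 1] = ","
      end
      out[#out + 1] = "}"
    end
  end

  local out = { "return " }
  serialize(game_state, out)
  local f = assert(io.open("save.lua", "w"))
  f:write(table.concat(out))
  f:close()

Loading it back is then just: local state = dofile("save.lua")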

This is absolutely amazing for debugging purposes. Also you never have to worry about corrupt save files or anything of its ilk. Development is easier, diagnosing problems is easier, and using a programmatic data structure on the backend means that you can pretty much keep things clean and forward compatible with ease.

(Oh, also being able to debug by altering the save file in any way you want is a godsend).




You probably know this, but remember that storing user data as code is a place where you (general "you") have to think very carefully about security.

Is there any way that arbitrary code in the file could compromise the user's system? If so, does the user know to treat these data files as executables? Is there any way someone untrusted could ever edit the file without the user's knowledge? Even in combination with other programs the user might be running? Are you sure about all of that?

Maybe Lua in particular is sandboxed so that's not a problem (beats me), but in general this is an area where safe high-level languages can all of a sudden turn dangerous. Personally I would rarely find it worth it.


This is a good point, but I feel that discouraging this type of approach is not the way to go.

I apologise in advance for ranting... I hope this is not too off-topic, but instead a "zoom out" on the issue.

This touches on something deep and wrong about how we use computers these days. Computers are really good at being computers, and the amplification of intellectual capabilities they afford is tremendous, but this is reserved for a limited few that were persistent enough and learned enough to rediscover the raw computer buried underneath, and what it can do.

For example, I dream of a world where everything communicates through s-expressions, all code is data and all data is code. Everything understandable all the way down. Imagine what people from all fields could create with this level of plug-ability and inter-operability. We had a whiff of that with the web so far, but it could be so much more powerful, so much simpler, so much more elegant. All the computer science is there, it's just a social problem.

I understand the security issues, but surely limiting the potential of computers is not the solution. There has to be a better way.


Lack of Turing-completeness can be a feature. Take PDF vs PostScript. The latter is Turing-complete and therefore you cannot jump to an arbitrary page or even know how many pages the document has without running the entire thing first.

By limiting expressiveness you also gain static analysis and predictability. It's not about limiting the potential of computers, it's about designing systems that strike the right balance between the power given to the payload and the guarantees offered to the container/receiver.

For example, it is only because JSON is flat data and not executable that web pages can reasonably call JSON APIs from third parties. There really is no "better way" -- if JSON was executable then calling such an API would literally be giving it full control of your app and of the user's computer.


If you have a nice data format like s-exprs, it's a fairly simple matter to just aggressively reject any code/data that can't be proven harmless. For example, if you're loading saved game data, just verify that the table contains only tables with primitive data; if there's anything else, throw an error. Then you can safely execute it in a Turing-complete environment and be sure it won't cause problems.

Speaking for myself, in my ideal world this sort of schema-checking and executing is ubiquitous and easy. Obviously that's not the world today. While there are tools for checking JSON schemata there doesn't seem to be a standard format. I wonder how hard it would be to implement a Lua schema-checker.
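Sketching it, a first cut might only be a dozen lines; something like this (names are illustrative):

  -- Accept only acyclic nests of tables holding primitive values.
  local function is_plain_data(value, seen)
    local t = type(value)
    if t == "boolean" or t == "number" or t == "string" then
      return true
    end
    if t ~= "table" then return false end  -- functions, userdata, threads
    seen = seen or {}
    if seen[value] then return false end   -- reject cycles
    seen[value] = true
    for k, v in pairs(value) do
      if not is_plain_data(k, seen) or not is_plain_data(v, seen) then
        return false
      end
    end
    return true
  end

Anything that passes a check like this can be handed to the loader with a reasonably clear conscience.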


Have you checked out EDN yet (https://github.com/edn-format/edn)?

It's a relatively new data format designed by Rich Hickey that has versioning and backward-compatibility baked in from the start.

EDN stands for "Extensible Data Notation". It has an extensible type system that enables you to define custom types on top of its built-in primitives, and there's no schema.

To define a type, you simply use a custom prefix/tag inline:

  #wolf/pack {:alpha "Greybeard" :betas ["Frostpaw" "Blackwind" "Bloodjaw"]}
While you can register custom handlers for specific tags, properly implemented readers can read unknown types without requiring custom extensions.

The motivating use case behind EDN was enabling the exchange of native data structures between Clojure and ClojureScript, but it's not Clojure specific -- implementations are starting to pop up in a growing number of languages (https://github.com/edn-format/edn/wiki/Implementations).

Here's the InfoQ video and a few threads from when it was announced:

https://news.ycombinator.com/item?id=4487462, https://groups.google.com/forum/#!topic/clojure/aRUEIlAHguU, http://www.infoq.com/interviews/hickey-clojure-reader


I've looked at EDN a bit, even started a sad little C# parser. I don't see what it has to do with my previous comment, which is all about how schemas are potentially useful. I'm trying to say that after you check the schema, you don't just read the data, you execute it, and that has the effect of applying the configuration or just constructing the object.


>There really is no "better way" -- if JSON was executable then calling such an API would literally be giving it full control of your app and of the user's computer.

Of course there's a "better way": running the code in a sandbox. You could do so using js.js[1], for example. (Of course, replacing a JSON API with sandboxed JS code is likely to be a bad idea. But it is possible.)

[1] https://sns.cs.princeton.edu/2012/04/javascript-in-javascrip...


You're right inasmuch as I shouldn't have implied that unsandboxed interpretation is the only option.

But my larger point still stands; the fundamental tradeoff is still "power of the payload" vs "guarantees to the container." Even in the case of sandboxed execution, the container loses two important guarantees compared with non-executable data formats like JSON:

1. I can know a priori roughly how much CPU I will spend evaluating this payload.

2. I can know that the payload halts.

This is why, for example, the D language in DTrace is intentionally not Turing-complete.


I agree 100% with you, but #1 isn't completely true. The counterexample is the ZIP bomb (http://en.wikipedia.org/wiki/Zip_bomb). Whenever you unzip anything you got from outside, you should limit the time spent and the amount of memory written.


If excess CPU/non-halting behavior is the issue, you could run the code with a timeout.


You could. But that has downsides also:

1. Imposing CPU limits incurs inherent CPU overhead and adds code complexity.

2. If those limits are hit, you can't tell whether the code just ran too long or whether it was in an infinite loop.

So now if we fully evaluate the options, the choice is between:

1. A purely data language like JSON: simple to implement, fast to parse, decoder can skip over parts it doesn't want, etc.

2. A Turing-complete data format: have to implement sandboxing and CPU limits (both far trickier security attack surfaces), have to configure CPU limits, and when those limits are exceeded the user doesn't know whether the code was in an infinite loop or not, so maybe have to re-configure CPU limits.

Sure, sometimes all the work involved in (2) is worth it, that's why we have JavaScript in web browsers after all. But a Turing-complete version of JSON would never have taken off like JSON did for APIs, because it would be far more difficult and perilous to implement.


I have to agree here. General Turing-completeness was known from the beginning to imply undecidable questions -- about a program's structure, running time, memory and so on. I don't think this has a place as the 'data'.

Abstractions exist for a reason -- this is analogous to source/channel coding separation or internet layers. They don't have to be that way, but are there for a reason.

Someone could change my opinion, though. Provide me with a data format which proves certain things about its behavior and that would be a nice counterexample.



Join me on my crusade to eliminate the use of 'former' and 'latter' in any writing unless the goal is obfuscation. It's error prone, almost always requires rereading, and never is the clearest choice.

Here's my attempt at a clearer version:

"Take Postscript vs PDF. Postscript is Turing-complete and therefore you cannot jump to an arbitrary page or even know how many pages the document has without running the entire thing first."


That's like campaigning to eliminate pronouns. "Former" and "latter" are just like "it", except they are for when you mentioned two things.

Repeating the proper noun doesn't achieve the goal of emphasizing the fact that you're referring to something you just mentioned.


Pronouns are fine. Substituting 'the first' and 'the second' would be an improvement. It's specifically 'former' and 'latter' that should be deprecated. I'd be interested in seeing a study comparing readers' comprehensions of the various phrasings. What cost in clarity would you be willing to pay?


Did you even read his comment? That is precisely what he said!

"Take PDF vs PostScript. The latter is Turing-complete"


I think that is what haberman meant.

Back on topic: the reason for PDF's existence is to be a non-Turing-complete subset of PostScript. Features like direct indexing to a page are why Linux has switched to PDF as the primary interchange format.


In a world where users will willingly enter malicious code into their computers if they believe it will do something they want[1], can there really be a better way?

[1] https://www.facebook.com/selfxss


When it comes right down to it, you can't fully protect people from themselves. Even in 'meat space', which the general population is presumably experienced with, people talk others into doing things that they should not all the time. Anything from social engineering to bog-standard scam artists masquerading as door-to-door salesmen.


But in 'meat space', it is way harder and more expensive to do evil against large numbers of people. For example, phishing as done electronically (throw a very, very wide net, and hope) wouldn't make economic sense if one had to do it manually.

Also, if, for example, cars and airplanes and banks and nuclear submarines would accept executable code as input, some people would do damage on a gargantuan scale.

Clearly, being liberal in what you accept must end somewhere. I argue that it should end very, very soon. Even innocuous things such as "let's allow everybody to read the subject of everyone's mail messages", if available at scale and cheaply, would entice criminal behavior, for example by those mining them for information that you are away from home.

Does anybody know how RMS thinks about passwords nowadays?


I'm not saying that scams in "cyberspace" don't present a greater threat than scams in "meatspace". I'm just pointing out that if you cannot protect people from themselves in "meatspace", then doing it in "cyberspace" is futile. You can fight it and cut back on it, but you will never actually win that fight. Technological solutions to what is ultimately a sociological problem only go so far, and we should be careful to not obsess over them to a fault.


And yet, in "meatspace", chainsaws come with more safety mechanisms than butter knives because the damage they can do is so much larger. Yes, we can't win that fight; people will die from chainsaw accidents, but I disagree that we shouldn't be more vigilant about chainsaws than about butter knives.


This seems like exactly the problem the parent post is complaining about, though: the people being tricked by selfxss aren't the people in the limited group the parent talks about; they're the people who don't have the technical knowledge to understand what the developer console does and why pasting in random JS might be a bad idea. So the phenomenon of selfxss reinforces the point.


Perhaps modern operating systems (or hardware?) need two modes - "Safe mode", where everything is sanitised, checked, limited and Secure Boot-style verified, and "Open mode" where it's not; where experts and enthusiasts can work without limit and without DRM.


That only adds an extra step to the social engineering process.


Safe mode is called iOS


When my computer has the potential to erase my bank account, I want to limit its potential.


You're discouraging it in the wrong place.

Learning to swim is not done by throwing a kid in the deep end of a pool. Learning to code is not done by encouraging bad security practices.


On the other hand, taking the easy way out when this kind of security problem comes up leads to having a machine that's just an appliance and not a computer. If you've closed every local code execution vulnerability, you've probably rendered your system completely non-programmable and erected a monumental barrier to learning how to hack.


Lua sandboxing is relatively straightforward. You can choose what functions from the standard library the script you are evaluating will see in its global scope. By passing an empty scope, the only thing the evaluated script can do is build tables, concatenate strings, do arithmetic, etc. You only need to worry about DoS due to infinite loops, but there are workarounds for that too.

In Lua 5.1 you can use setfenv: http://www.lua.org/manual/5.1/manual.html#pdf-setfenv

And in Lua 5.2 the functions that eval strings receive the global scope as an optional parameter: http://www.lua.org/manual/5.2/manual.html#pdf-loadfile
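A minimal sketch for Lua 5.2+, assuming the save file only needs to build and return a table (mode "t" additionally rejects precompiled chunks):

  local function load_save(path)
    local f = assert(io.open(path, "r"))
    local source = f:read("*a")
    f:close()
    -- Empty table {} as the environment: no io, os, etc. visible.
    local chunk, err = load(source, "@" .. path, "t", {})
    if not chunk then return nil, err end
    local ok, data = pcall(chunk)  -- catch runtime errors in the file
    if not ok then return nil, data end
    return data
  end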


I trust Lua sandboxing. See, e.g.:

1. http://stackoverflow.com/questions/1224708/how-can-i-create-...

2. http://stackoverflow.com/questions/4134114/capabilities-for-...

I find it easier to trust Lua than similar facilities in other programming languages because the kernel of the language has relatively simple semantics, so the TCB of a sandbox is smaller, and the source is easier to understand than that of most other languages.

Note that sandboxing in Lua 5.2 has even simpler semantics than in Lua 5.1 - few other languages evolve in a way that makes the language easier to trust.


It's the halting problem - all it takes is for someone to embed in your data code that loops forever (infinite wait), recurses endlessly (crash), or exploits some vulnerability... Not directly related to saving files as Lua, but say as bytecode: https://www.youtube.com/watch?v=OSMOTDLrBCQ


Lua can be sandboxed so your data file can't call arbitrary functions (but can still call a controlled subset, e.g. a function called RGB that does r*255*255 + g*255 + b so your colors are somewhat human-readable in the file, yet 24-bit integers in memory).

But it's still code, so you can e.g. inject an infinite loop and the loader will hang. (You can protect against this, you can install a debug hook that gets called after every N instructions executed, and kill the loader.)
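A sketch of that protection (the budget number is illustrative):

  -- Abort the chunk if it executes more than a fixed instruction budget.
  local function run_with_budget(chunk, budget)
    debug.sethook(function()
      error("save file exceeded instruction budget")
    end, "", budget)   -- count hook: fires after every `budget` instructions
    local ok, result = pcall(chunk)
    debug.sethook()    -- remove the hook again
    return ok, result
  end

  -- run_with_budget(load("while true do end"), 1e6) then fails cleanly
  -- instead of hanging the loader.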



Typically sandboxing is stage one of any Lua implementation. You don't need raw IO access, and rarely need to print to the screen, for instance.


I'm aware of all the issues and already have a plan. But Lua generally only has access to the APIs you give it; from game code our Lua VM has no access to the OS at all, just game functions, and those game functions are never system related.

The biggest 'concern' would be save hacking, but at the end of the day that will happen no matter what so it doesn't bother me much.


Preach it. My favorite persistence code: stuff that has nothing to do with SQL/NoSQL.

I leaned heavily on Python's pickle module for serializing a few thousand entities to disk a few years ago. By streaming them to the application at startup time, it remained plenty fast for all datasets it'd encounter. I intended to replace it with SQLite one day, but I never had to. I could just keep them all in memory.

I'd probably choose something a bit safer now, but it was hard to beat the simplicity.


I used to do that, but pickle bit me once. I think the format changes between versions or something. I had to start the statistical model from scratch.


Now that you mention it, I remember running into something like that. It's a big issue. That module had a bunch of scar code to migrate entities as they came up from disk.


Yeah, I wouldn't use pickle for anything where backwards or forwards compatibility is important. It is however very convenient for 'I want to send/store this data for a bit' tasks.


Why does using a Lua-based format stop the files from being corrupted?


Maybe he means that he doesn't have to deal with bugs in a custom binary serializer.


It can only become corrupted by external factors; in a lot of games I've worked on, in-game bugs could lead to corrupted saves being written out to disk. Since in this case we are just serializing Lua data, unless the serializer itself has a bug it will always write out correctly, and any issues become issues of game logic rather than anything else.


I don't think it does. I think he meant that if a save became corrupted it wouldn't do so silently, it would violently crash the game because of a syntax error.


It doesn't. But it makes them much easier to fix.

Edit: Igglyboo has a point too.


I did this with C# in my last game. All the map/object editors output C# code on save, which was then included in the compiled code on the next build. The beauty is that your "data" files get automatically updated when you refactor your regular code! On top of that loading is faster, because you don't need to worry about fetching a file and parsing it, the whole thing is just compiled code embedded in your executable.


That Lua was originally designed as a configuration language becomes really clear when you start doing things like this. Having my code and configuration be separate but equal was really a paradigm shift for me.

Also, the Tiled Map Editor exports directly to Lua.


IIRC that's how Office and Photoshop file format started. I think it's a nightmare for compatibility in the end.


So, it's similar to JSON (JavaScript), but valid Lua syntax.

  local t = {}
  t = {["foo"] = "bar", [123] = 456}
  t.foo2 = "bar2"


While one could say that this is about save files for games, I would say its implications are more about save files for software projects in general. If you are building the game in Lua, of course Lua is going to be the preferred way to save your game, since you are already using Lua objects and interpreting files in that language will be easy to integrate.

If you have ever used Maven XML configs, Java object marshalling or C# XML, you understand the pains of using XML as a file format for software projects and data representation. You have to find a solution that is language agnostic, and neither Lua nor JSON is.


I did something similar, but used JSON instead (it's pretty trivial to (de)serialize Lua tables to/from JSON). This made it easy to send data to the server, and to inspect it with standard tools as well.
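For example, with the dkjson library (one of several Lua JSON implementations; the table here is just an illustration):

  local json = require "dkjson"

  local save = { level = 3, hp = 72, inventory = { "sword", "potion" } }
  local text = json.encode(save, { indent = true })  -- Lua table -> JSON text

  local restored, pos, err = json.decode(text)       -- JSON text -> Lua table
  assert(restored, err)

One caveat: JSON can't represent every Lua table (mixed or non-string/number keys, for instance), so it works best if the data is kept JSON-shaped from the start.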


This sounds a lot like NSCoding for Objective-C (Cocoa). Though you'd still have to define the types/classes and name for each property you want to save. But you could technically save it in a big blob, and then read it into memory as you resume.

Could persist to disk as binary, SQL, or a plist (XML).

I guess the only downside is, that if you got a lot of composite classes all with their own properties and associations (say a graph), there's a lot of manual work to be done.


I've had to write output save file formats for various projects on several occasions, and it never occurred to me to take this approach.

Thanks for sharing this, it's one of those ideas that (to me) seems so brilliant in its simplicity that I probably would've never thought of it.

Any hiccups in the day-to-day work using this approach? I'm just trying to get a better idea of the workflow since I'm very seriously considering applying it to my next project.


The biggest hiccup is almost a literal one; serializing large Lua structures and then writing them to disk can take a lot of time. But this can largely be mitigated by saving compiled Lua instead of text Lua.
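A sketch of the compiled variant, assuming the serializer has already produced valid Lua source in save_source:

  -- Precompile the generated source so later loads skip the parser.
  local chunk = assert(load(save_source))   -- loadstring() in Lua 5.1
  local f = assert(io.open("save.bin", "wb"))
  f:write(string.dump(chunk))               -- write precompiled bytecode
  f:close()

  -- loadfile() transparently accepts both text and bytecode files.
  local data = assert(loadfile("save.bin"))()

One trade-off to note: Lua bytecode isn't portable across Lua versions, and a sandbox that only accepts mode "t" chunks will (rightly) refuse it.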


That's how people are going to cheat at your game.


I had a lot more fun recently playing the free game Boson X for PC (http://www.boson-x.com/) than I would have otherwise because I discovered that the game folder contains editable Lua scripts. The scripts control the game physics, scoring system, controls, level data, and more.

I’ve created mods of the game where you run faster but gravity is stronger, and where all levels are randomly mixed into one level, and where the dangerous falling platforms also give you energy while you’re on them, and where the sound effects give the player clearer feedback on what they’re doing. And though I could cheat by multiplying my score by 1000 and submitting it online, I actually have been careful to always comment out the high-score saving and submission code in each of my mods.

I like the game much more than if the developers had obfuscated the Lua files so I couldn’t read and edit them.


The save format does not matter at all. It wouldn't matter even if it were an obscure, made-up format. All it would do is slow down 'cheaters' by half an hour.

The only argument against human-editable text files is parsing speed, not security.


Data size has a bit to do with it. Not trying to be pedantic, just adding that.


Cheating is good. I remember having tons of fun with Age of Empires and SimCity because I used cheat codes.

If the player has fun, it's a nice feature! :D


And what about the people competing against the happy cheater?


in a single player game with save games on local disk? this question is nearly trolling.


Some people do compete for speed or score in single player games. Arcade games have always had scoreboards, modern "arcade-style" games have online ones, and a community can turn any solitary activity into a competitive one:

http://speedrunslive.com/

http://speeddemosarchive.com/

Speedrunners are an exceptional case, but I think everyone gets a little annoyed when they look at a leaderboard and all the top players have scores of UINT_MAX or times of 0 seconds.

Obviously cheaters will find a way regardless of whether you give them the source code or not, I'm just saying dfc's concern is not totally ridiculous.


I DO get annoyed when I see those scores, but in a lot of cases even having a leaderboard is something that was introduced just to make the game more "social" and less because it makes sense in that specific game.

And yes, it's not ridiculous, on the contrary, it's perfectly understandable.

Of course, these kinds of questions depend a lot on the game in question, and I think they don't have a definitive answer :)


Oh, don't get me wrong, I love speedrunning and trickjumping competitions, except that they should always require the whole replay - and even then you can't be sure whether or not the whole thing was TASed.


Of course, in multi-player competitive games anti-cheating is a pretty big concern, because it works against the purpose of the game: a competition with well defined rules and conditions.

If the core of the game is single-player/non-competitive, why should we be so worried about cheating?


Plot twist: it's actually a 'teach yourself Lua' game.


So level 38 is "figure out how to write the answer by editing the save file"?


Why not level 7? By 38 the player could be fixing bugged waypoints, profiling out bad O(n!) code, resetting time to get a specific time-based drop, tweaking character attributes to make puzzles/quests easier, etc. The savefile just becomes another interface to play with.

Gives a fresh angle on the 'open world' type game.


Reminds me of tweaking the Colonization game (1991?) by editing text files...


Does it matter? Unless the game is multiplayer, in which case you should assume that client files are untrustworthy anyway.


There are ways around it but if people want to cheat their own SP experience who am I to stop them? We'll obfuscate a bit to dissuade casual users but I don't know that I've ever encountered a game that didn't have some level of save hacking available.

Hell, I've used it myself more than a few times.


Hash the information and include the hash in the file. If the hash and the contents don't match when you try to load it, you can refuse it.

If not loading things is important to you, mind.


What's to stop people from re-hashing the changed file? :)


You could hash the file contents plus a salt that is contained within the application binary. Most people wouldn’t know how to extract the salt from the binary so they could get the hash right. Though I guess if people are determined enough to hack your game, one hacker might just publish the salt or a small program to rehash files for you. If a user has enough time on their hands, there’s nothing you can do to stop them from running your software in a VM with a debugger and finding out its secrets; the best you can do is making that hard enough that people won’t bother.
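A sketch of that scheme in pure Lua (FNV-1a is used only for brevity -- it isn't cryptographically strong, so a serious build would use an HMAC; the salt is a placeholder):

  local SALT = "baked-into-the-binary"        -- illustrative placeholder

  -- 32-bit FNV-1a; uses Lua 5.3+ integer and bitwise operators.
  local function fnv1a(s)
    local hash = 2166136261
    for i = 1, #s do
      hash = hash ~ s:byte(i)                 -- XOR in one byte
      hash = (hash * 16777619) & 0xFFFFFFFF   -- multiply, keep 32 bits
    end
    return hash
  end

  local function sign(contents)
    return ("%08x"):format(fnv1a(contents .. SALT))
  end

  -- On save: write sign(data) on the first line, then the data itself.
  -- On load: recompute and refuse the file if the signatures differ.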


Lack of knowledge, lack of interest.

If you wanted more security, you could keep a secret that you don't include in the saves but do include when you calculate the hash, so that anyone who doesn't have the key is going to get the wrong answer. That's about as far as I'd consider going for relatively trivial data like save games. Though that's, in principle, discoverable if someone's sufficiently interested.

After that point, it becomes much simpler for someone to watch the memory associated with your program and extract/alter the values there. (Programs to do that to games are generally called Trainers.) That's not a complicated thing to do unless someone's tried to stop you doing it.

There are some techniques to provide some degree of security there. Changing where in memory you place your information each time springs to mind, thus making it more difficult for people to find out where the values are and then share their locations. However, even that's not perfect. Depending on how sure you want to be that no-one's going to alter the values, you're potentially looking at requiring very deep knowledge of security there.

After that point the next easiest target may be the program file itself.

That said, if you want to get around that sort of problem and you're really serious about it, then running your encryption in an environment that hostile may be making things more difficult than they need to be. You might use a trusted platform module, to try to make the environment you were in less hostile, if one were present on the user's machine. But, honestly, I'd want the information not to be stored or calculated on the user's machine if it were that valuable. Have the user's end be the input, encrypt their signals with your public key, and do the calculations that you needed to be sure of remotely.

Though then the user has to trust you. I wouldn't usually advocate that my users trust me that much - not unless we were dealing with a situation where the information we were talking about was entangled with others in some way such that a reasonable argument could be made that they didn't own it, and I was just the best common arbiter I could think of.

-she shrugs awkwardly-

You can get yourself into a situation where it's probable that the amount of effort someone would have to invest is vastly greater than the likely value of the information fairly easily. But ultimately it's a question of how expensive you want to make things and what that's worth to you. Against a sufficiently dedicated adversary, with a sufficiently valuable target, there are so many unknowns in computer security that I wouldn't even be sure that storing the data on your server would be sufficient ^^;


It's only cheating if the developer disapproves.



