I have been using Trial[1] for the past few weeks to test out game development in Common Lisp, and have been having a great time. Being able to alter (almost) all aspects of your game while it's running is a blessing.
Lisp languages seem well-suited for building games. The ability to evaluate code interactively without recompilation is a huge deal for feature building, incremental development, and bug-fixing. Retaining application state between code changes seems like it would be incredibly useful. Common Lisp also appears to be a much faster language than I would have blindly assumed.
The main downside for me (in general, not just for game programming) is the clunkiness in using data structures - maps especially. But the tradeoff seems worth it.
One of the downsides is that implementations like SBCL are deeply integrated with their host platform and need things like a well-performing GC implementation; getting this running on specialized game hardware is challenging, as the article describes. Getting over the hurdle of the low-level integration is difficult. The reward comes when one gets to the point where the rapid incremental development cycle of Common Lisp, even with connected devices, kicks in.
In the old historic Naughty Dog use case, the development system was written in Common Lisp on an SGI, paired with a C++ runtime and low-level Scheme code on the PlayStation.
> Common Lisp also appears to be a much faster language than I would have blindly assumed.
There are two modes:
1) fast optimized code which allows for some low-level stuff to stay with Common Lisp
2) unoptimized but natively compiled code, which enables safe (-> the runtime does not crash) interactive and incremental development. This is the mode much software can run in nowadays, and it is still "fast enough" for many use cases.
Except for occasionally using a small embedded Scheme in C++ when I worked at Angel Studios, I haven’t much experience using Lisp languages for games.
That said I have a question: is it a common pattern when using Lisp languages for games to use a flyweight object reuse pattern? This would minimize the need for GC.
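To make the question concrete, here is a hedged sketch of what such object reuse could look like in Common Lisp (all the names here are invented for illustration, not taken from any engine):

```lisp
;; A tiny object pool: reuse dead bullet structs instead of consing
;; new ones each frame, minimizing GC pressure.
(defstruct bullet (x 0.0) (y 0.0) (live nil))

(defvar *bullet-pool* (loop repeat 64 collect (make-bullet)))

(defun acquire-bullet (x y)
  "Reuse a dead bullet if one exists; allocate only as a fallback."
  (let ((b (or (find-if-not #'bullet-live *bullet-pool*)
               (let ((new (make-bullet)))
                 (push new *bullet-pool*)
                 new))))
    (setf (bullet-x b) x
          (bullet-y b) y
          (bullet-live b) t)
    b))

(defun release-bullet (b)
  (setf (bullet-live b) nil))
```

Whether this pattern is common in Lisp game code specifically, I'd also like to know; the sketch only shows that it's straightforward to express.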
If that's your main downside, that's pretty good, since clunkiness is in many ways fixable. Personally with standard CL I like to use property lists with keywords, so a "map literal" is just (list :a 3 :b 'other). It's fine when the map is small. The getter is just getf, setting is the usual setf around the getter. There's a cute way to loop by #'cddr for a key-and-value loop, though Alexandria (a very common utility library) has some useful utils for looping/removing/converting plists as well.
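A minimal sketch of that plist style (the variable name is mine):

```lisp
;; A "map literal" is just a property list of keywords and values.
(defvar *opts* (list :a 3 :b 'other))

(getf *opts* :a)            ; => 3
(setf (getf *opts* :c) 7)   ; SETF around the getter adds/updates a key

;; The cute key-and-value loop by #'cddr:
(loop for (key value) on *opts* by #'cddr
      collect (cons key value))
```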
If typing out "(list ...)" is annoying, it's a few lines of code to let you type {:a 3 :b 4} instead, like Clojure. And the result of that can be a plist, or a hash table, or again like Clojure one of the handful of immutable map structures available. You can also easily make the native hash tables print themselves out with the curly bracket syntax.
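A hedged sketch of those "few lines of code" (a real library would likely be more careful, e.g. installing into a named readtable rather than the global one):

```lisp
;; Reader macro so {:a 3 :b (+ 2 2)} reads as (list :a 3 :b (+ 2 2)),
;; i.e. a plist with its values evaluated at runtime.
(defun read-curly (stream char)
  (declare (ignore char))
  `(list ,@(read-delimited-list #\} stream t)))

(set-macro-character #\{ #'read-curly)
;; Make } a terminating delimiter, same as ) :
(set-macro-character #\} (get-macro-character #\)))
```

Swapping `list` for a hash-table-building helper gives you the hash-table variant instead.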
(On the speed front, you might be amused by https://renato.athaydes.com/posts/how-to-write-slow-rust-cod... But separately, when you want to speed up Lisp (with SBCL) even more than default, it's rather fun to be able to run disassemble on your function and see what it's doing at the assembly level, and turn up optimization hints and have the compiler start telling you (even on the individual function level) about where it has to use e.g. generic addition instead of a faster assembly instruction because it can't prove type info and you'll have to tell it/fix your code. It can tell you about dead code it removed. You can define stack-allocation if needed. Simple benchmarking that also includes processor cycles and memory allocated is available immediately with the built-in time macro...)
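(Continuing the aside, a sketch of that workflow; `sum-fixnums` is a made-up example function, and the declarations shown are standard CL even though the tooling commentary is SBCL's:)

```lisp
;; Declare types and raise the optimization level; SBCL will then emit
;; compiler notes where it can't prove types and falls back to generic ops.
(defun sum-fixnums (v)
  (declare (type simple-vector v)
           (optimize (speed 3)))
  (let ((acc 0))
    (declare (type fixnum acc))
    (loop for x across v
          do (incf acc (the fixnum x)))
    acc))

;; (disassemble 'sum-fixnums)       ; inspect the generated machine code
;; (time (sum-fixnums some-vector)) ; SBCL's TIME reports cycles and bytes consed
```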
Things have costs; what's your underlying point? That one shouldn't create such a macro, even if it's a one-liner, because of unquantified costs or concerns...?
Singling out individual macros for "cost" analysis this way is very weird to me. I disagree entirely. Everything has costs, not just macros, and if you're doing an analysis you need to include the costs of not having the thing (i.e. the benefits of having it).

Anyway, whether it's a reader macro, compiler macro, or normal function, lines of code is actually a great proxy measure for all sorts of things, even if it can be an abused measure. When compared to other more complex metrics like McCabe's cyclomatic complexity, or Halstead's Software Science metrics (which use redundancy of variable names to try to quantify something like clarity and debuggability), the correlations with simple lines of code are high. (See for instance https://www.oreilly.com/library/view/making-software/9780596... which you can find a full pdf of in the usual places.) But the correlations aren't 1, and indeed there's an important caveat against making programs too short.

Though a value you didn't mention which I think can factor into cost is one of "power", where shorter programs (and languages that enable them) are generally seen as more powerful, at least for that particular area of expression. Shorter programs are one of the benefits of higher-level languages. And besides power, I do think fewer lines of code most often corresponds to superior clarity and debuggability (and of course fewer bugs overall, as other studies will tell you), even if code golfing can take it too far.
I wouldn't put much value in any cost due to a lack of adoption, because as soon as you do that, you've given yourself a nice argument to drop Lisp entirely and switch to Java or another top-5 language. Maybe if you can quantify this cost, I'll give it more thought. It also seems rather unfair in the context of CL, because the way adoption of say new language features often happens in other ecosystems is by force, but Lisp has a static standard, so adoption otherwise means adoption of libraries or frameworks where incidentally some macros come along for the ride. e.g. I think easy-route's defroute is widely adopted for users of hunchentoot, but will never be for CL users in general because it's only relevant for webdev. And fare's favorite macro, nest, is part of uiop and so basically part of every CL out there out of the box -- how's that for availability if not adoption -- but I think its adoption is and will remain rather small, because the problem it solves can be solved in multiple ways (my favorite: just use more functions) and the most egregious cases of attacking the right margin don't come up all that often. Incidentally, it's another case in point on lines of code, the CL implementation is a one liner and easy to understand (and like all macros rather easy to test/verify with macroexpand) but the Scheme implementation is a bit more sophisticated: https://fare.livejournal.com/189741.html
What's your cost estimate on a simple version of the {} macro shown in https://news.ycombinator.com/item?id=1611453 ? One could write it differently, but it's actually pretty robust to things like duplicate keys or leaving keys out, it's clear, and the use of a helper function aids debuggability (popularized most in call-with-* macro expansions). However, I would not use it as-is with that implementation, because it suffers from the same flaw as Lisp's quote-lists '(1 2 3) and array reader macro #(1 2 3) that keep me from using either of those most of the time as well. (For passerby readers, the flaw is that if you have an element like "(1+ 3)", that unevaluated list itself is the value, rather than the computation it's expressing. It's ugly to quasiquote and unquote what are meant to be data structure literals, so I just use the list/vector functions. That macro can be fixed on this though by changing the "hash `,(read-..." text to "hash (list ,@(read-...)". I'd also change the hash table key test.)
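For passerby readers, a small sketch of the flaw being described (nothing library-specific):

```lisp
;; A quoted list is the data as written; LIST evaluates its arguments.
'(1 2 (1+ 3))       ; => (1 2 (1+ 3)) -- the sublist stays unevaluated
(list 1 2 (1+ 3))   ; => (1 2 4)

;; Hence the suggested fix for such a macro: splice the read forms into
;; a LIST call, so element expressions are evaluated at runtime.
```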
Please try to respond to my argument without 1) straw-manning it, 2) or reading a bunch into it that isn't there.
You made a point about the macro only costing a few lines of code. That is not a useful way to look at macros, as I can attest having written any number of short macros that I in retrospect probably shouldn't have written, and one or two ill-conceived attempts at DSLs.
Sometimes fewer lines of code is not better. Code golfing is not, in and of itself, a worthy engineering goal. The most important aims of abstraction are clarity and facility, and if you do not keep those in mind as you're shoving things into macros and subroutines and code-sharing between parts of the codebase that should not be coupled, you will only lead yourself and your teammates to grief.
Things have costs. Recognize what the costs are. Use macros judiciously.
I started with my two questions not to strawman, but to find out if there was some underlying point or argument you had in mind that prompted you to make such a short reply in the first place. All I could read in it was not an argument, but a high level assertion, and not any sort of call to action. That's fine, I normally would have ignored it, but I felt like riffing on my disagreement with that assertion. To reiterate, I think you can reasonably measure cost through lines of code, even if that shouldn't be the only or primary metric, and I provided some outside-my-experience justifications, including one that suggests that an easy to measure metric like lines of code correlates with notoriously harder to measure metrics like the three things you stated. (If cost is to be measured by clarity -- how do you even measure clarity? Halstead provides one method, it's not the only one, but if we're going to use the word "measure", I prefer concrete and independently repeatable ways to get the same measurement value. Sometimes the measurement is just a senior person on a team saying something is unclear, often if you get another senior's opinion they'll say the same thing, but it'd be nice if we could do better.)
Now you've expanded yourself, thanks. I mostly agree. My quibble is with size being "not a useful way" to look at macros: a larger macro is more likely to be complex, difficult to understand, buggy, and harder to maintain, so it had better enable a correspondingly large amount of utility. But it doesn't necessarily have to be complex; it could just be large while wrapping a lot of trivial boilerplate. DSL-enabling macros are often large, but I don't think they justify themselves much of the time. And I've also regretted some one-line macros. Length can't be the only thing to look at, but it has a place. I'd much rather be on the hook for dealing with a short macro than a large one. Independent of size, I rather dislike how macros in general can break interactive development. What's true for macros is that they're not something to spray around willy-nilly; it's a lot less true to say the same about functions.
If you asked, I don't think I'd have answered that those two things are the most important aims to abstraction, but they're quite important for sure, and as you say the same problems can come with ill-made subroutines, not just macros. I agree overall with your last two paragraphs, and the call to action about recognizing costs and using macros judiciously. (Of course newbies will ask "how to be judicious?" but that's another discussion.)
That's not implementing a literal (an object that can be read), but a shorthand notation for constructor code. The idea of a literal is that it is an object created at read-time, not at runtime.
In Common Lisp, every literal notation returns an object when read -> at read-time. The {} example does not, because the read macro creates code, not a literal object of type hash-table. The code then needs to be executed to create an object -> which happens at runtime.
> literal adj. (of an object) referenced directly in a program rather than being computed by the program; that is, appearing as data in a quote form, or, if the object is a self-evaluating object, appearing as unquoted data. ``In the form (cons "one" '("two")), the expressions "one", ("two"), and "two" are literal objects.''
CL-USER 4 > (read-from-string "1")
1
1
CL-USER 5 > (read-from-string "(1 2 3)") ; -> which needs quoting in code, since the list itself doubles in Lisp as an operator call
(1 2 3)
7
CL-USER 6 > (read-from-string "1/2")
1/2
3
CL-USER 7 > (read-from-string "\"123\"")
"123"
5
CL-USER 8 > (read-from-string "#(1 2 3)")
#(1 2 3)
8
But the {} notation does not describe a literal: when read, it creates code, not an object of type hash-table.
This also means that (quote {:a 1}) generates a list and not a hash-table when evaluated. A literal can be quoted. The QUOTE operator prevents the object from being evaluated.
In the above example there is no hash-table embedded in the code. Instead each call to FOO will create a fresh new hash-table at runtime. That's not the meaning of a literal in Common Lisp.
Thanks for the clarification on the meaning of "literal" in Common Lisp, I'll try to keep that in mind in the future. My meaning was more in the sense of literals being some textual equivalent representation for a value. Whether or not computation behind the scenes happens at some particular time (read/compile/run) isn't too relevant. For example in Python, one could write:
a = list()
a.append(1)
a.append(2)
a.append(1+3)
You can call repr(a) to get the canonical string representation of the object. This is "[1, 2, 4]". Python's doc on repr says that for many object types, including most builtins, eval(repr(obj)) == obj. Indeed eval("[1, 2, 4]") == a. But what's more, Python supports a "literal" syntax, where you can type in source code, instead of those 4 lines:
b = [1, 2, 1+3]
And b == a, despite this source not being exactly equal at the string-diff level to the repr() of either a or b. The fact that there was some computation of 1+3 that took place at some point, or in a's case that there were a few method calls, is irrelevant to the fact that the final (runtime) value of both a and b is [1, 2, 4]. That little bit of computation of an element is usually expected in other languages that have this sort of way to specify structured values, too; Lisp's behavior trips up newcomers (and Clojure's as well for simple lists, though not for vectors or maps).
Do you have any suggestions on how to talk about this "literal syntax" in another way that won't step on or cause confusion with the CL spec's definition?
> Whether or not computation behind the scenes happens at some particular time (read/compile/run) isn't too relevant.
Actually it is relevant: is the object mutable? Are new objects created? What optimizations can a compiler do? Is it an object which is a part of the source code?
If we allow [1, 2, (+ 1 a)] in a function as a list notation, then we have two choices:
1) every invocation of [1, 2, (+ 1 a)] returns a new list.
2) every invocation of [1, 2, (+ 1 a)] returns a single list object, but modifies the last slot of the list. -> then the list needs to be mutable.
(defun foo (a)
[1, 2, (+ 1 a)])
Common Lisp in general assumes that in
(defun foo (a)
'(1 2 3))
it is undefined what exact effects attempts to modify the quoted list (1 2 3) have. Additionally, the elements are not evaluated. We have to assume that the quoted list (1 2 3) is a literal constant.
Thus FOO
* returns ONE object. It does not cons new lists at runtime.
* modifying the list may not be possible. A compiler might allocate such an object in a read-only memory segment (that would be a rare feature -> but it might happen on platforms like iOS where machine code is by default not mutable).
* attempts to modify the list may be detected.
SBCL:
* (let ((a '(1 2 3))) (setf (car a) 4) a)
; in: LET ((A '(1 2 3)))
; (SETF (CAR A) 4)
;
; caught WARNING:
; Destructive function SB-KERNEL:%RPLACA called on constant data: (1 2 3)
; See also:
; The ANSI Standard, Special Operator QUOTE
; The ANSI Standard, Section 3.7.1
;
; compilation unit finished
; caught 1 WARNING condition
(4 2 3)
* attempts to modify literal constants may modify coalesced lists
In the above function, a file compiler might detect that similar lists are used and allocate only one object for both.
The value of (foo) can then be T or NIL, a warning might be signalled, or an error might be detected.
So Common Lisp really pushes the idea that in source code these literals should be treated as immutable constant objects, which are a part of the source code.
Even for structures: (defun bar () #S(PERSON :NAME "Joe" :AGE a)) -> A is not evaluated, and BAR always returns the same object.
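A minimal sketch of that distinction (function names invented here):

```lisp
;; A quoted list is part of the code, so the same object is returned
;; on every call; LIST conses a fresh object each time.
(defun quoted-list () '(1 2 3))
(defun fresh-list  () (list 1 2 3))

;; (eq (quoted-list) (quoted-list)) ; => T   -- same literal object
;; (eq (fresh-list)  (fresh-list))  ; => NIL -- fresh allocation per call
```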
> Do you have any suggestions on how to talk about this "literal syntax" in another way that won't step on or cause confusion with the CL spec's definition?
Actually I was under the impression that "literal" in a programming language often means "constant object".
Though it's not surprising that languages may assume different, more dynamic semantics for compound objects like lists, vectors, hash tables or OOP objects, especially languages focused more on developer convenience than on compiler optimizations. Common Lisp does not provide an object notation with default component evaluation there, but assumes that one uses functions for object creation in this case.
Yeah, again I meant irrelevant to those who share such a broader ("dynamic" is a fun turn of phrase) definition of "literal" as I was using, it's very relevant to CL. I thought of mentioning the CL undefined behavior around modification you brought up explicitly in the first comment as yet another reason I try to avoid using #() and quoted lists, but it seemed like too much of an aside in an already long aside. ;) But while in aside-mode, this behavior I really think is quite a bad kludge of the language, and possibly the best thing Clojure got right was its insistence on non-place-oriented values. But it is what it is.
Bringing up C is useful because I know a similar "literal" syntax has existed since C99 for structs, and is one of the footguns available to bring up if people start forgetting that C is not a subset of C++. Looks like they call it "compound literals": https://en.cppreference.com/w/c/language/compound_literal (And of course you can type expressions like y=1+4 that result in the struct having y=5.) And it also notes about possible string literal sharing. One of the best things Java got right was making strings immutable...
> The ability to evaluate code interactively without recompilation
SBCL and other implementations compile code to machine code then execute it. That is to say, when a form is submitted to the REPL, the form is not interpreted, but first compiled then executed. The reason execution finishes quickly is because compilation finishes quickly.
There are some implementations, like CCL, with a special interpreter mode exclusively for REPL-usage.[1] However, at least SBCL and ECL will compile code, not interpret.
I specifically talk about the fast evaluator for SBCL. But even without that contrib, SBCL does have another evaluator as well that's used in very specific circumstances.
I think a lot of this is confusion between online versus batch compilation? Most of us have only ever seen/used batch compilation. To that end, many people assume that JIT in an interpreter is how online compilation is done.
I probably am more guilty of that than I should be.
I confess I wasn't positive what the correct term would be. "Online" is common for some uses of it. And I "knew" that what we call compilation for most programs used to be called "batch compilation." Searching the term was obnoxious, though, such that I gave up. :(
There are 1980s papers about Lisp compilers competing with Fortran compilers; unfortunately, with the AI Winter and the high costs of such systems, people lost sight of that work.
Well, I imagine at the time they had some LISP implementations that were very well tuned for specific high end machines, which essentially duplicated Fortran functionality. This is difficult to do for general purpose Lisps like SBCL. It was also probably very expensive.
There are some libraries that make maps and the like usable with a cleaner syntax. You could also make some macros of your own for the same purpose, if syntax is the concern.
This is super neat - SBCL is an awesome language implementation, and I've always wanted to do CL development for a "real" game console.
I'm also surprised (in a good way) that Shinmera is working on this - I've seen him a few times before on #lispgames and in the Lisp Discord, and I didn't know that he was into this kind of low-level development. I've looked at the guts of SBCL briefly and was frightened away, so kudos to him.
I wonder if SBCL (+ threading/SDL2) works on the Raspberry Pi now...
I'm not doing the SBCL parts, that's all Charles' work that I hired him for. My work is the portability bits that Trial relies on to do whatever and the general build architecture for this, along with the initial runtime stubbing.
My current unannounced project is a lot more ambitious still, being a 3D hack & slash action game. I post updates about that on the Patreon if you're interested.
Thanks to the author for the fascinating and detailed write-up. It feels like, a lot of the time, this level of detail around the specifics of 'blessed' (not homebrew) console porting is only revealed years after the end of the console's lifetime.
As an aside, reading about this kind of deeply interesting work always makes me envious when I think about the rote software I spend all day writing :)
At least back when I was working, these "blessed" tools were usually a tad hacked together; modern homebrew toolchains for many older platforms are better, except for debugging support (since the devkits for the machines usually had better hooks available, and also avoided the entire GDB focus).
Having been in both worlds, I'm not entirely sure there's that much to be envious of.
as I was just sitting down to another day of ruby on rails (that I am grateful for!) I was thinking.. I wonder what hobby/open source projects could use some of my attention later..
.. what projects my attention could use later .. :D
You cannot publish games with homebrew, it has to use the official SDK. Besides that, almost nobody has a jailbroken Switch, so it would make it extremely hard to play any games on anything but an emulator.
This isn't really my scene so I don't know the details, but I remember reading that the first 10+ million Switches produced have an unpatchable bootloader exploit. I'm sure you're correct that almost nobody actually has a hacked console, but my understanding is that they're readily available for people who want one.
Yeah, maybe. But also the official SDKs are pretty good and you get support from Nintendo. It seems like a pretty big risk to use an unsupported SDK... for what benefit?
I don't have access to Nintendo's SDK so I can't compare directly, but the article cites an inability to map executable pages. libnx supports this (but of course, this is moot if Nintendo wouldn't let you ship it). But the main benefit is being able to talk about and share your work without worrying about violating an NDA.
The OS can do it, and some Nintendo titles on the Switch do use this capability, but I have talked to Nintendo directly about using it, and it's a hard No. I can't even use the JIT feature purely for dev.
You don’t let your kids jailbreak their Switch. Because it’s a damn online system, so any leaked info and Nintendo can brick the Switch. And their game states are far too valuable for the kids for that.
They can ban the Switch, but offline games will continue to work. Also the account doesn't get banned, so you can buy a new one. (Speaking from experience, unfortunately.) You can still play the new Zelda; you just can't play Splatoon, Mario Kart, or Smash online on the banned Switch. It's possible but arduous to rescue the saves off the banned Switch if you have access to a second modded Switch that is not banned (also speaking from experience): use homebrew to back up and restore your saves, then launch them all from sysMMC with legitimately owned versions of those games and let the cloud save feature kick in. Animal Crossing has a separate dedicated save tool.
Block Nintendo servers, disable auto updates, use separate sysMMC and emuMMC with no unauthorized games or DLC run on the sysMMC. If you follow the main guide everyone uses now, it's pretty safe. But updating becomes a more difficult and manual process. Have to grab a zip of the new firmware from the 'net on your PC and copy it to the SD card to be installed via a homebrew method. Installing games, game updates, and DLC is similarly manual. It's not like the PS3, Vita, and 3DS(?) where you can pull it all off of official servers easily.
Oh yeah, and we're stuck with a "tethered jailbreak", that's perhaps the worst part. Any time you turn off the hacked Switch it needs to be sent a payload from your PC or phone to boot up again then.
Whether it's all worth it depends on your needs I suppose. You could get a bunch of tournament setups going with Smash (or another fighting game) + all DLC for your LAN party and save a bit of money. You can try out new singleplayer games before buying them physically. You can mod games and run emulators. Honestly the Switch scene seems largely less cool than what we had with the 3DS or Wii (Wii U was a little disappointing as well). I barely touch my Switch(es) since getting a Steam Deck.
I was under the impression the modchips were only for non-launch Switches that didn't have the old exploit available and that they were basically doing the same thing. How do they work differently?
Context: Naughty Dog used a custom Lisp-alike (GOAL) to build the Jak & Daxter series on PS2. They left enough debugging information in that it was possible to reverse engineer. The OpenGOAL project has done so, and these games can now be run on all platforms that their GOAL compiler gets ported to (x86 for now AFAIK). Would be cool to port this to the Switch.
I've just bought Kandria. I'm not much of a game player so I probably won't get much play out of it, but Shinmera is clearly pushing the bounds of the Lisp world, and that's something to support.
I wish the likes of Nintendo and Sony would finance such efforts themselves. It's one more way to create games (IP) for your console, so what could possibly be the downside of starting something similar to the GitHub Accelerator for your platform?
Because it's well established that game developers can and will jump through whatever hoops the platform holder demands at their own expense, they don't have the leverage to be picky about the technical details when deciding which platforms to ship on. Nintendo doesn't need to create new incentives to release on the Switch when they already have the biggest incentive of all: 140+ million units sold, and a high attach rate.
At least there isn't as much hoop jumping as there used to be, since the systems have all converged on using commodity CPU and GPU architectures with at most minor embellishments.
Yeah, also whatever they would build, the other platform vendors won't choose the same thing, and it won't be the exact variant of Lisp (or whatever) that even the few interested developers would want.
I wish vendors would be just more supportive of different llvm tool chains. Rust isn't even well supported.
Rust isn't even available in the Android NDK, even though it is now used on Android.
Same applies to Rust on Windows, and whatever Microsoft is doing, windows-rs isn't that great, and after what the team did with C++/WinRT I don't have high expectations.
So consoles have even fewer reasons to support Rust.
This is what I come to HN for. Kudos to OP and their colleague. I know it's impossible but what a blessing it would be if Nintendo could be a little more open about their system.
> Steel Bank Common Lisp (SBCL) is a high performance Common Lisp compiler. It is open source / free software, with a permissive license. In addition to the compiler and runtime system for ANSI Common Lisp, it provides an interactive environment including a debugger, a statistical profiler, a code coverage tool, and many other extensions.
See also https://github.com/attractivechaos/plb2 ...where I provided the SBCL solutions, so there's probably still a significant chunk of performance to be squeezed out.
How's CL's GC performance for games nowadays? I've been slightly eyeing the upcoming Autumn Lisp Game Jam myself, but last I checked all the major libre CL impls, including SBCL, still used a full stop-the-world collector, which feels like a recipe for latency spikes. I saw flashes of stuff on sbcl-devel about someone working on a lower-latency one, but I don't know whether it got anywhere.
Great article. One question I had, not to diminish this hard work: why not use a different implementation like ECL, which is already pretty portable and can compile to static C code that can just be compiled traditionally for the target? I've been doing that for a Wasm + SDL2 game in Lisp and it's been (relatively) straightforward. I suppose performance might be an issue.
How is this thing useful? Does it mean that, once it's done, you can use Trial to develop games completely hassle-free on the Switch, without needing a popular engine like Unity, Godot, or Unreal?
I don't have a mac, let alone a silicon one, so I can't test on it (I also have no patience for Apple's BS). However, it should work. At least SBCL itself runs, and Trial is mostly portable code, so it should, too, modulo some regressions.
The Switch can support a USB keyboard, which would be the nice way to do it. There's already a fair number of officially published games with keyboard and/or mouse support, including a couple of programming ones. It has an on-screen keyboard too of course but you wouldn't want to rely on that more than absolutely necessary.
I hope this port succeeds.
[1]: https://github.com/Shirakumo/trial