Porting SBCL to the Nintendo Switch (tymoon.eu)
411 points by todsacerdoti 4 days ago | 80 comments





I have been using Trial[1] for the past few weeks to test out game development in Common Lisp, and have been having a great time. Being able to alter (almost) all aspects of your game while it's running is a blessing.

I hope this port succeeds.

[1]: https://github.com/Shirakumo/trial


Lisp languages seem well-suited for building games. The ability to evaluate code interactively without recompilation is a huge deal for feature building, incremental development, and bug-fixing. Retaining application state between code changes seems like it would be incredibly useful. Common Lisp also appears to be a much faster language than I would have blindly assumed.

The main downside for me (in general, not just for game programming) is the clunkiness in using data structures - maps especially. But the tradeoff seems worth it.


One of the downsides is that implementations like SBCL are deeply integrated with the platform and need things like a well-performing GC implementation - getting this running on specialized game hardware is challenging, as the article describes. Getting over the hurdle of the low-level integration is difficult. The reward comes once one reaches the point where the rapid incremental development cycle of Common Lisp, even with connected devices, kicks in.

In the historic Naughty Dog use case, it was a development system written in Common Lisp on an SGI plus a C++ runtime with low-level Scheme code on the PlayStation.

> Common Lisp also appears to be a much faster language than I would have blindly assumed.

There are two modes:

1) fast, optimized code, which allows some low-level stuff to stay in Common Lisp

2) unoptimized, but natively compiled code, which enables safe (-> the runtime does not crash) interactive and incremental development -> this is the mode in which much of the software can run nowadays and which is still "fast enough" for many use cases (see the sketch below)
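
To make the distinction concrete, here is a minimal sketch of how the two modes can look in practice (the function names and types are just illustrative); the optimization settings are ordinary per-function declarations rather than a global compiler switch:

    ;; Mode 1: fast, low-safety code for a hot inner routine.
    (defun dot3 (ax ay az bx by bz)
      (declare (optimize (speed 3) (safety 0) (debug 0))
               (type single-float ax ay az bx by bz))
      (+ (* ax bx) (* ay by) (* az bz)))

    ;; Mode 2: default settings. Still compiled to native machine code,
    ;; but with full checks, so errors land in the debugger instead of
    ;; crashing the runtime, and redefinition stays pleasant.
    (defun average (numbers)
      (/ (reduce #'+ numbers) (length numbers)))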


Except for occasionally using a small embedded Scheme in C++ when I worked at Angel Studios, I haven’t much experience using Lisp languages for games.

That said I have a question: is it a common pattern when using Lisp languages for games to use a flyweight object reuse pattern? This would minimize the need for GC.


If that's your main downside, that's pretty good, since clunkiness is in many ways fixable. Personally with standard CL I like to use property lists with keywords, so a "map literal" is just (list :a 3 :b 'other). It's fine when the map is small. The getter is just getf, setting is the usual setf around the getter. There's a cute way to loop by #'cddr for a key-and-value loop, though Alexandria (a very common utility library) has some useful utils for looping/removing/converting plists as well.
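
For instance, a small sketch of the plist approach (the variable name is made up):

    (defparameter *settings* (list :a 3 :b 'other))

    (getf *settings* :a)             ; => 3
    (setf (getf *settings* :b) 42)   ; setf around the getter updates (or adds) a key

    ;; Looping by #'cddr walks the plist two elements at a time:
    (loop for (key value) on *settings* by #'cddr
          do (format t "~S -> ~S~%" key value))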

If typing out "(list ...)" is annoying, it's a few lines of code to let you type {:a 3 :b 4} instead, like Clojure. And the result of that can be a plist, or a hash table, or again like Clojure one of the handful of immutable map structures available. You can also easily make the native hash tables print themselves out with the curly bracket syntax.
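
As a rough sketch of such a reader macro (a hypothetical variant that builds a hash table and evaluates its elements):

    (defun read-curly-map (stream char)
      (declare (ignore char))
      ;; Read key/value forms up to the closing brace.
      (let ((items (read-delimited-list #\} stream t)))
        `(let ((table (make-hash-table :test #'equal)))
           (loop for (key value) on (list ,@items) by #'cddr
                 do (setf (gethash key table) value))
           table)))

    (set-macro-character #\{ #'read-curly-map)
    (set-macro-character #\} (get-macro-character #\) nil))

    ;; {:a 3 :b (+ 2 2)} now reads as code that builds a hash table with those entries.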

(On the speed front, you might be amused by https://renato.athaydes.com/posts/how-to-write-slow-rust-cod... But separately, when you want to speed up Lisp (with SBCL) beyond the defaults, it's rather fun to be able to run disassemble on your function and see what it's doing at the assembly level. You can turn up optimization hints and have the compiler start telling you, even at the individual function level, where it has to use e.g. generic addition instead of a faster assembly instruction because it can't prove type info, so you'll have to tell it or fix your code. It can tell you about dead code it removed. You can declare stack allocation if needed. Simple benchmarking that also reports processor cycles and memory allocated is available immediately with the built-in time macro...)
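
For the curious, a tiny sketch of that workflow (output elided; the notes and numbers vary by SBCL version and platform):

    (defun sum-elements (xs)
      (declare (optimize (speed 3)))
      (let ((total 0))
        (dolist (x xs total)
          (incf total x))))
    ;; At (speed 3) SBCL prints efficiency notes here about falling back to
    ;; generic addition, because it cannot prove the types of X and TOTAL.

    (disassemble #'sum-elements)   ; inspect the generated machine code

    (time (sum-elements (make-list 1000000 :initial-element 1)))
    ;; TIME reports run time, processor cycles, and bytes consed.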


The cost of a macro is not measured in lines of code. It's measured in things like adoption, clarity, and debuggability.

Things have costs; what's your underlying point? That one shouldn't create such a macro, even if it's a one-liner, because of unquantified costs or concerns...?

Singling out individual macros for "cost" analysis this way is very weird to me. I disagree entirely. Everything has costs, not just macros, and if you're doing an analysis you need to include the costs of not having the thing (i.e. the benefits of having it).

Anyway, whether it's a reader macro, compiler macro, or normal function, lines of code is actually a great proxy measure for all sorts of things, even if it can be an abused measure. When compared to other, more complex metrics like McCabe's cyclomatic complexity, or Halstead's Software Science metrics (which use redundancy of variable names to try to quantify something like clarity and debuggability), the correlations with simple lines of code are high. (See for instance https://www.oreilly.com/library/view/making-software/9780596... which you can find a full pdf of in the usual places.) But the correlations aren't 1, and indeed there's an important caveat against making programs too short.

A value you didn't mention which I think can factor into cost is "power", where shorter programs (and languages that enable them) are generally seen as more powerful, at least for that particular area of expression. Shorter programs are one of the benefits of higher-level languages. And besides power, I do think fewer lines of code most often corresponds to superior clarity and debuggability (and of course fewer bugs overall, as other studies will tell you), even if code golfing can take it too far.

I wouldn't put much value in any cost due to a lack of adoption, because as soon as you do that, you've given yourself a nice argument to drop Lisp entirely and switch to Java or another top-5 language. Maybe if you can quantify this cost, I'll give it more thought.

It also seems rather unfair in the context of CL, because the way adoption of, say, new language features often happens in other ecosystems is by force, but Lisp has a static standard, so adoption otherwise means adoption of libraries or frameworks where incidentally some macros come along for the ride. e.g. I think easy-route's defroute is widely adopted among users of hunchentoot, but will never be for CL users in general because it's only relevant for webdev. And fare's favorite macro, nest, is part of uiop and so basically part of every CL out there out of the box -- how's that for availability if not adoption -- but I think its adoption is and will remain rather small, because the problem it solves can be solved in multiple ways (my favorite: just use more functions) and the most egregious cases of attacking the right margin don't come up all that often. Incidentally, it's another case in point on lines of code: the CL implementation is a one-liner and easy to understand (and, like all macros, rather easy to test/verify with macroexpand), but the Scheme implementation is a bit more sophisticated: https://fare.livejournal.com/189741.html
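
For readers who haven't run across nest, a minimal sketch: it simply nests each form inside the previous one, saving the rightward drift:

    ;; (uiop:nest (a b) (c d) e) expands to (a b (c d e)).
    (uiop:nest
     (let ((x 1)))
     (let ((y 2)))
     (+ x y))
    ;; i.e. (let ((x 1)) (let ((y 2)) (+ x y)))  => 3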

What's your cost estimate on a simple version of the {} macro shown in https://news.ycombinator.com/item?id=1611453 ? One could write it differently, but it's actually pretty robust to things like duplicate keys or leaving keys out, it's clear, and the use of a helper function aids debuggability (popularized most in call-with-* macro expansions). However, I would not use it as-is with that implementation, because it suffers from the same flaw as Lisp's quote-lists '(1 2 3) and array reader macro #(1 2 3) that keep me from using either of those most of the time as well. (For passerby readers, the flaw is that if you have an element like "(1+ 3)", that unevaluated list itself is the value, rather than the computation it's expressing. It's ugly to quasiquote and unquote what are meant to be data structure literals, so I just use the list/vector functions. That macro can be fixed on this though by changing the "hash `,(read-..." text to "hash (list ,@(read-...)". I'd also change the hash table key test.)
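
To spell out that flaw for the same passerby readers, a tiny sketch:

    '(1 (1+ 3) 3)        ; => (1 (1+ 3) 3)  -- the inner form stays as data, unevaluated
    #(1 (1+ 3) 3)        ; => #(1 (1+ 3) 3) -- same story for the vector reader macro
    (list 1 (1+ 3) 3)    ; => (1 4 3)       -- the LIST function evaluates its arguments
    `(1 ,(1+ 3) 3)       ; => (1 4 3)       -- quasiquote/unquote works too, but is ugly for "literals"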

A basically identical version at the topmost level is here https://github.com/mikelevins/folio2/blob/master/src/maps-sy... that turns the map into an fset immutable map instead; minor changes would let you avoid needing to use folio2's "as" function.


Please try to respond to my argument without 1) straw-manning it or 2) reading a bunch into it that isn't there.

You made a point about the macro only costing a few lines of code. That is not a useful way to look at macros, as I can attest having written any number of short macros that I in retrospect probably shouldn't have written, and one or two ill-conceived attempts at DSLs.

Sometimes fewer lines of code is not better. Code golfing is not, in and of itself, a worthy engineering goal. The most important aims of abstraction are clarity and facility, and if you do not keep those in mind as you're shoving things into macros and subroutines and code-sharing between different parts of the codebase that should not be coupled, you are only going to lead yourself and your teammates to grief.

Things have costs. Recognize what the costs are. Use macros judiciously.


I started with my two questions not to strawman, but to find out if there was some underlying point or argument you had in mind that prompted you to make such a short reply in the first place. All I could read in it was not an argument, but a high level assertion, and not any sort of call to action. That's fine, I normally would have ignored it, but I felt like riffing on my disagreement with that assertion. To reiterate, I think you can reasonably measure cost through lines of code, even if that shouldn't be the only or primary metric, and I provided some outside-my-experience justifications, including one that suggests that an easy to measure metric like lines of code correlates with notoriously harder to measure metrics like the three things you stated. (If cost is to be measured by clarity -- how do you even measure clarity? Halstead provides one method, it's not the only one, but if we're going to use the word "measure", I prefer concrete and independently repeatable ways to get the same measurement value. Sometimes the measurement is just a senior person on a team saying something is unclear, often if you get another senior's opinion they'll say the same thing, but it'd be nice if we could do better.)

Now you've expanded yourself, thanks. I mostly agree. My quibble is with size being "not a useful way": a larger macro is more likely to be complex, difficult to understand, buggy, and harder to maintain, so it had better enable a correspondingly large amount of utility. But it doesn't necessarily have to be complex; it could just be large while wrapping a lot of trivial boilerplate. DSL-enabling macros are often large, but I don't think they justify themselves much of the time. And I've also regretted some one-line macros. Length can't be the only thing to look at, but it has a place. I'd much rather be on the hook for dealing with a short macro than a large one. Independent of size, I rather dislike how macros in general can break interactive development. What's true for macros is that they're not something to spray around willy-nilly; it's a lot less true to say the same about functions.

If you asked, I don't think I'd have answered that those two things are the most important aims of abstraction, but they're quite important for sure, and as you say the same problems can come with ill-made subroutines, not just macros. I agree overall with your last two paragraphs, and with the call to action about recognizing costs and using macros judiciously. (Of course newbies will ask "how to be judicious?" but that's another discussion.)


> simple version of the {} macro shown in https://news.ycombinator.com/item?id=1611453

That's not implementing a literal (an object that can be read), but a shorthand notation for constructor code. The idea of a literal is that it is an object created at read-time and not at runtime.

In Common Lisp every literal notation returns an object when read -> at read-time. The {} example does not, because the read macro creates code and not a literal object of type hash-table. The code then needs to be executed to create an object -> which happens at runtime.

The ANSI CL glossary says:

https://www.lispworks.com/documentation/HyperSpec/Body/26_gl...

> literal adj. (of an object) referenced directly in a program rather than being computed by the program; that is, appearing as data in a quote form, or, if the object is a self-evaluating object, appearing as unquoted data. ``In the form (cons "one" '("two")), the expressions "one", ("two"), and "two" are literal objects.''

    CL-USER 4 > (read-from-string "1")
    1
    1

    CL-USER 5 > (read-from-string "(1 2 3)")   ; -> which needs quoting in code, since the list itself doubles in Lisp as an operator call
    (1 2 3)
    7

    CL-USER 6 > (read-from-string "1/2")
    1/2
    3

    CL-USER 7 > (read-from-string "\"123\"")
    "123"
    5

    CL-USER 8 > (read-from-string "#(1 2 3)")
    #(1 2 3)
    8
But the {} notation does not describe a literal: when read, it creates code, not an object of type hash-table.

    CL-USER 9 > (read-from-string "{:foo bar}")
    (LET ((HASH (MAKE-HASH-TABLE))) (SET-HASH-VALUES HASH (QUOTE (:FOO BAR))) HASH)
    10
This also means that (quote {:a 1}) generates a list and not a hash-table when evaluated. A literal can be quoted. The QUOTE operator prevents the object from being evaluated.

    CL-USER 13 > (quote {:a 1}) 
    (LET ((HASH (MAKE-HASH-TABLE))) (SET-HASH-VALUES HASH (QUOTE (:A 1))) HASH)

    CL-USER 14 > '(defun foo () "ab cd")
    (DEFUN FOO NIL "ab cd")
In the above example the string is a literal object in the code.

    CL-USER 15 > '(defun foo () {:foo bar})
    (DEFUN FOO NIL (LET ((HASH (MAKE-HASH-TABLE))) (SET-HASH-VALUES HASH (QUOTE (:FOO BAR))) HASH))
In the above example there is no hash-table embedded in the code. Instead, each call to FOO will create a fresh hash-table at runtime. That's not the meaning of a literal in Common Lisp.

Thanks for the clarification on the meaning of "literal" in Common Lisp, I'll try to keep that in mind in the future. My meaning was more in the sense of literals being some textual equivalent representation for a value. Whether or not computation behind the scenes happens at some particular time (read/compile/run) isn't too relevant. For example in Python, one could write:

    a = list()
    a.append(1)
    a.append(2)
    a.append(1+3)
You can call repr(a) to get the canonical string representation of the object. This is "[1, 2, 4]". Python's doc on repr says that for many object types, including most builtins, eval(repr(obj)) == obj. Indeed eval("[1, 2, 4]") == a. But what's more, Python supports a "literal" syntax, where you can type in source code, instead of those 4 lines:

    b = [1, 2, 1+3]
And b == a, despite this source not being exactly equal at the string-diff level to the repr() of either a or b. The fact that there was some computation of 1+3 that took place at some point, or in a's case that there were a few method calls, is irrelevant to the fact that the final (runtime) value of both a and b is [1, 2, 4]. That little bit of computation of the element is usually expected in other languages that have this sort of way to specify structured values, too; Lisp's behavior trips up newcomers (and Clojure's as well for simple lists, but not for vectors or maps).

Do you have any suggestions on how to talk about this "literal syntax" in another way that won't step on or cause confusion with the CL spec's definition?


> Whether or not computation behind the scenes happens at some particular time (read/compile/run) isn't too relevant.

Actually it is relevant: is the object mutable? Are new objects created? What optimizations can a compiler do? Is it an object which is a part of the source code?

If we allow [1, 2, (+ 1 a)] in a function as a list notation, then we have two choices:

1) every invocation of [1, 2, (+ 1 a)] returns a new list.

2) every invocation of [1, 2, (+ 1 a)] returns a single list object, but modifies the last slot of the list. -> then the list needs to be mutable.

    (defun foo (a)
      [1, 2, (+ 1 a)])
Common Lisp in general assumes that in

    (defun foo (a)
     '(1 2 3))
it is undefined what exact effects attempts to modify the quoted list (1 2 3) have. Additionally, the elements are not evaluated. We have to assume that the quoted list (1 2 3) is a literal constant.

Thus FOO

* returns ONE object. It does not cons new lists at runtime.

* modifying the list may not be possible. A compiler might allocate such an object in a read-only memory segment (that would be a rare feature -> but it might happen on architectures like iOS where machine code is by default not mutable).

* attempts to modify the list may be detected.

SBCL:

    * (let ((a '(1 2 3))) (setf (car a) 4) a)
    ; in: LET ((A '(1 2 3)))
    ;     (SETF (CAR A) 4)
    ; 
    ; caught WARNING:
    ;   Destructive function SB-KERNEL:%RPLACA called on constant data: (1 2 3)
    ;   See also:
    ;     The ANSI Standard, Special Operator QUOTE
    ;     The ANSI Standard, Section 3.7.1
    ; 
    ; compilation unit finished
    ;   caught 1 WARNING condition
    (4 2 3)
* attempts to modify literal constants may modify coalesced lists

for example

    (defun foo ()
      (let ((a '(1 2 3))
            (b '(1 2 3)))
        (setf (car a) 10)
        (eql (car a) (car b))))
In the above function, a file compiler might detect that similar lists are used and allocate only one object for both variables.

The value of (foo) can be T or NIL; a warning might be signalled or an error might be detected.

So Common Lisp really pushes the idea that in source code these literals should be treated as immutable constant objects, which are a part of the source code.

Even for structures: (defun bar () #S(PERSON :NAME "Joe" :AGE a)) -> A is not evaluated, and BAR always returns the same object.
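
A small sketch of that, assuming a PERSON structure defined as below:

    (defstruct person name age)

    ;; #S(...) is a read-time literal: the A is not evaluated, so the AGE
    ;; slot simply holds the symbol A.
    (defun bar () #S(PERSON :NAME "Joe" :AGE A))

    (person-age (bar))   ; => A, the symbol itself, not some variable's value
    (eq (bar) (bar))     ; => T, the same literal object on every call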

> Do you have any suggestions on how to talk about this "literal syntax" in another way that won't step on or cause confusion with the CL spec's definition?

Actually I was under the impression that "literal" in a programming language often means "constant object".

See for example string literals in C:

https://wiki.sei.cmu.edu/confluence/display/c/STR30-C.+Do+no...

Though it's not surprising that languages may assume different, more dynamic semantics for compound objects like lists, vectors, hash tables or OOP objects, especially languages which are focused more on developer convenience than on compiler optimizations. Common Lisp does not provide an object notation with default component evaluation here, but assumes that one uses functions for object creation in this case.


Yeah, again I meant irrelevant to those who share the broader ("dynamic" is a fun turn of phrase) definition of "literal" I was using; it's very relevant to CL. I thought of mentioning the CL undefined behavior around modification you brought up explicitly in the first comment as yet another reason I try to avoid using #() and quoted lists, but it seemed like too much of an aside in an already long aside. ;) But while in aside-mode: I really think this behavior is quite a bad kludge of the language, and possibly the best thing Clojure got right was its insistence on non-place-oriented values. But it is what it is.

Bringing up C is useful because I know a similar "literal" syntax has existed since C99 for structs, and is one of the footguns available to bring up if people start forgetting that C is not a subset of C++. Looks like they call it "compound literals": https://en.cppreference.com/w/c/language/compound_literal (And of course you can type expressions like y=1+4 that result in the struct having y=5.) And it also notes about possible string literal sharing. One of the best things Java got right was making strings immutable...


> The ability to evaluate code interactively without recompilation

SBCL and other implementations compile code to machine code and then execute it. That is to say, when a form is submitted to the REPL, the form is not interpreted, but first compiled and then executed. The reason execution finishes quickly is that compilation finishes quickly.
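
A quick way to see this at an SBCL REPL (assuming the default :compile evaluator mode):

    CL-USER> (defun square (x) (* x x))
    SQUARE
    CL-USER> (compiled-function-p #'square)
    T

The function defined interactively is already native code; no separate compile step was requested.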

There are some implementations, like CCL, with a special interpreter mode exclusively for REPL-usage.[1] However, at least SBCL and ECL will compile code, not interpret.

[1] https://github.com/Clozure/ccl/blob/v1.13/level-1/l1-readloo...


I specifically talk about the fast evaluator for SBCL. But even without that contrib, SBCL does have another evaluator as well that's used in very specific circumstances.

I think a lot of this is confusion between online versus batch compilation? Most of us have only ever seen/used batch compilation. To that end, many people assume that JIT in an interpreter is how online compilation is done.

I probably am more guilty of that than I should be.


> online compilation

? incremental compilation


I confess I wasn't positive what the correct term would be. "Online" is common for some uses of it. And I "knew" that what we call compilation for most programs used to be called "batch compilation." Searching the term was obnoxious, though, so I gave up. :(

Do either CCL or SBCL have any kind of partial evaluation or tracing compilation?

> the form is not interpreted, but first compiled then executed

That's TempleOS technology right there.


Other way around.

There are 1980s papers about Lisp compilers competing with Fortran compilers; unfortunately, with the AI Winter and the high costs of such systems, people lost sight of it.

Well, I imagine at the time they had some LISP implementations that were very well tuned for specific high end machines, which essentially duplicated Fortran functionality. This is difficult to do for general purpose Lisps like SBCL. It was also probably very expensive.

What is difficult is having the compiler team budgets of Apple, Google, IBM, Microsoft, Intel, NVidia, AMD, ...

As well as high end machines built for Lisp.

There are some libraries that make maps and the like usable with a cleaner syntax. You too could make some macros of your own for the same purpose, if syntax is the concern.

OT but is there an easy way to build a game with Godot or Unity and deploy it to the Switch? I'd love this for my son.

This is super neat - SBCL is an awesome language implementation, and I've always wanted to do CL development for a "real" game console.

I'm also surprised (in a good way) that Shinmera is working on this - I've seen him a few times before on #lispgames and in the Lisp Discord, and I didn't know that he was into this kind of low-level development. I've looked at the guts of SBCL briefly and was frightened away, so kudos to him.

I wonder if SBCL (+ threading/SDL2) works on the Raspberry Pi now...


I'm not doing the SBCL parts, that's all Charles' work that I hired him for. My work is the portability bits that Trial relies on to do whatever and the general build architecture for this, along with the initial runtime stubbing.

And, as mentioned, *her :)


Holy cow, Kandria looks amazing. Is it also developed using Trial? https://www.youtube.com/watch?v=usc0Znm-gbA

Yes, it's also open source: https://github.com/shirakumo/kandria

My current unannounced project is a lot more ambitious still, being a 3D hack & slash action game. I post updates about that on the Patreon if you're interested.


- Is it not her?


Oh, I did not know that she transitioned. I just remembered that the author of portacle was a woman.

What a world of pain it must be to have to go through that.

Kudos to her. Also what she does for lisp is amazing.


Thanks to the author for the fascinating and detailed write-up. It feels like a lot of the time this level of detail around the specifics of 'blessed' (not homebrew) console porting is only revealed years after the end of the console's lifetime.

As an aside, reading about this kind of deeply interesting work always makes me envious when I think about the rote software I spend all day writing :)


At least back when I was working, these "blessed" tools were usually a tad hacked together; modern homebrew toolchains for many older platforms are better except for debugging support (since the devkits for the machines usually had better hooks available, while also avoiding the entire GDB focus).

Having been in both worlds, I'm not entirely sure there's that much to be envious of.


well said!

as I was just sitting down to another day of ruby on rails (that I am grateful for!) I was thinking.. I wonder what hobby/open source projects could use some of my attention later..

.. what projects my attention could use later .. :D


> The answer to that is that while I would desperately like to share it all publicly, the NDA prevents us from doing so.

I'm curious what the rationale here was for using the official SDK, rather than the unencumbered "homebrew" ones[0].

As a complete guess, maybe Nintendo doesn't let you officially publish games built using 3rd party SDKs?

[0] https://switchbrew.org/wiki/Setting_up_Development_Environme...


You cannot publish games with homebrew; they have to use the official SDK. Besides that, almost nobody has a jailbroken Switch, so it would be extremely hard to play such a game on anything but an emulator.

> almost nobody has a jailbroken Switch

This isn't really my scene so I don't know the details, but I remember reading that the first 10+ million Switches produced have an unpatchable bootloader exploit. I'm sure you're correct that almost nobody actually has a hacked console, but my understanding is that they're readily available for people who want one.


> You cannot publish games with homebrew, it has to use the official SDK

This doesn't surprise me much, but does Nintendo state this explicitly anywhere public?

If Nintendo chose to sign an application developed using a 3rd party toolchain, there's no technical reason why it couldn't run on retail consoles.


Yeah, maybe. But also the official SDKs are pretty good and you get support from Nintendo. It seems like a pretty big risk to use an unsupported SDK... for what benefit?

I don't have access to Nintendo's SDK so I can't compare directly, but the article cites an inability to map executable pages. libnx supports this (but of course, this is moot if Nintendo wouldn't let you ship it). But the main benefit is being able to talk about and share your work without worrying about violating an NDA.

https://switchbrew.github.io/libnx/jit_8h.html

https://switchbrew.org/wiki/JIT_services


The OS can do it, and some Nintendo titles on the Switch do use this capability, but I have talked to Nintendo directly about using it, and it's a hard No. I can't even use the JIT feature purely for dev.

Really? What is the justification for allowing Nintendo titles to use it but not third parties? Security concerns?

That's what they claim, but ultimately it's their thing, so they do with it whatever they like.

You don't let your kids jailbreak their Switch. It's a damn online system, so any leaked info and Nintendo can brick the Switch. And their game states are far too valuable to the kids for that.

They can ban the Switch, but offline games will continue to work. Also, the account doesn't get banned, so you can buy a new one. (Speaking from experience, unfortunately.) You can still play the new Zelda; you just can't play Splatoon, Mario Kart, or Smash online on the banned Switch. It's possible but arduous to rescue the saves off the banned Switch if you have access to a second modded Switch that is not banned (also speaking from experience): use homebrew to back up and restore your saves, then launch them all from sysMMC with legitimately owned versions of those games and let the cloud save feature kick in. Animal Crossing has a separate dedicated save tool.

Block Nintendo servers, disable auto updates, use separate sysMMC and emuMMC with no unauthorized games or DLC run on the sysMMC. If you follow the main guide everyone uses now, it's pretty safe. But updating becomes a more difficult and manual process. Have to grab a zip of the new firmware from the 'net on your PC and copy it to the SD card to be installed via a homebrew method. Installing games, game updates, and DLC is similarly manual. It's not like the PS3, Vita, and 3DS(?) where you can pull it all off of official servers easily.

Oh yeah, and we're stuck with a "tethered jailbreak"; that's perhaps the worst part. Any time you turn off the hacked Switch, it needs to be sent a payload from your PC or phone to boot up again.

Whether it's all worth it depends on your needs I suppose. You could get a bunch of tournament setups going with Smash (or another fighting game) + all DLC for your LAN party and save a bit of money. You can try out new singleplayer games before buying them physically. You can mod games and run emulators. Honestly the Switch scene seems largely less cool than what we had with the 3DS or Wii (Wii U was a little disappointing as well). I barely touch my Switch(es) since getting a Steam Deck.


> we're stuck with a "tethered jailbreak"

Modchipping makes things permanent, although the soldering isn't for the faint of heart.


Related: https://opengoal.dev.

Context: Naughty Dog used a custom Lisp-alike (GOAL) to build the Jak & Daxter series on PS2. They left enough debugging information in that it was possible to reverse engineer. The OpenGOAL project has done so, and these games can now be run on all platforms that their GOAL compiler gets ported to (x86 for now AFAIK). Would be cool to port this to the Switch.


I've just bought Kandria. I'm not much of a game player so I probably won't get much play out of it, but Shinmera is clearly pushing the bounds of the Lisp world, and that's something to support.

Her work is just amazing. And makes me incredibly happy, as I like to write some CL here and there.

Aw, thank you. Happy hacking!

I wish the likes of Nintendo and Sony would themselves finance such efforts. I mean, it's another way to create games (IP) for your console; what could possibly be the downside of starting something similar to GitHub Accelerator for your platform?

Because it's well established that game developers can and will jump through whatever hoops the platform holder demands at their own expense, they don't have the leverage to be picky about the technical details when deciding which platforms to ship on. Nintendo doesn't need to create new incentives to release on the Switch when they already have the biggest incentive of all: 140+ million units sold, and a high attach rate.

At least there isn't as much hoop jumping as there used to be, since the systems have all converged on using commodity CPU and GPU architectures with at most minor embellishments.


Yeah, also whatever they would build, the other platform vendors won't choose the same thing, and it won't be the exact variant of Lisp or w/e that even the few nice developers would want.

I wish vendors would just be more supportive of different LLVM toolchains. Rust isn't even well supported.


Rust isn't even available in the Android NDK, even though it is now used on Android.

The same applies to Rust on Windows and whatever Microsoft is doing; windows-rs isn't that great, and after what the team did with C++/WinRT I don't have high expectations.

So consoles have even less reasons to support Rust.


They did in the past, and the only thing people did was port MAME and other emulators, or copies from the old 8- and 16-bit days.

That is why we cannot have nice things.


b/c it isn't described anywhere...

SBCL - "Steel Bank Common Lisp"

> Steel Bank Common Lisp (SBCL) is a high performance Common Lisp compiler. It is open source / free software, with a permissive license. In addition to the compiler and runtime system for ANSI Common Lisp, it provides an interactive environment including a debugger, a statistical profiler, a code coverage tool, and many other extensions.

https://www.sbcl.org/


"Steel Bank" in turn is a homage to "Carnegie Mellon" of CMUCL.

According to the Benchmarks Game, SBCL is roughly as fast as Node.

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


See also https://github.com/attractivechaos/plb2 ...where I provided the SBCL solutions, so there's probably still a significant chunk of performance to be squeezed out.

How's CL's GC performance for games nowadays? I've been slightly eyeing the upcoming Autumn Lisp Game Jam myself, but last I checked all the major libre CL impls, including SBCL, still used a full stop-the-world collector, which feels like a recipe for latency spikes. I saw flashes of stuff on sbcl-devel about someone working on a lower-latency one, but I don't know whether it got anywhere.

see this detailed report: https://raw.githubusercontent.com/Shinmera/talks/master/els2...

> Overall we have needed to do surprisingly little actual performance analysis and optimisation work to make Kandria run well.


This is what I come to HN for. Kudos to OP and their colleague. I know it's impossible but what a blessing it would be if Nintendo could be a little more open about their system.

Somewhat offtopic, just flashed through my mind: you know what would be amazing and absolutely useless at the same time?

Porting Yuzu to Nintendo Switch


This has been done, and it’s been possible to run Switch on Switch for several months [0] (about 39 minutes into the video).

[0] https://youtu.be/H1gveQUBIKk


Shoot, it was a suspiciously genius idea* for a Friday 13th.

*: Compared to my usual ideas


Since I had to look it up: Yuzu is a Nintendo Switch emulator.

Absolutely kudos to that effort. I love hacking around with CL and some Schemes, and an effort like this is just amazing.

Great article. One question I had, not to diminish this hard work, is why not use a different implementation like ECL, which is pretty portable already and can compile to static C code that can just be compiled traditionally for the target? I've been doing that for a Wasm + SDL2 game in Lisp and it's been (relatively) straightforward. I suppose performance might be an issue.

Because, as you guessed, no implementation other than SBCL comes close to the performance needed.

How is this thing useful? Does it mean you can use Trial to develop games completely hassle-free on the Switch after it's done, and you don't need a popular engine like Unity, Godot, or Unreal?

Awe-inspiring work Shinmera and Charles.

Does anybody know what the status of Trial on macOS is? Specifically on Apple Silicon.

I don't have a Mac, let alone an Apple Silicon one, so I can't test on it (I also have no patience for Apple's BS). However, it should work. At least SBCL itself runs, and Trial is mostly portable code, so it should, too, modulo some regressions.

I probably missed it, but since the Switch doesn't have a keyboard, how do you handle text input?

The Switch can support a USB keyboard, which would be the nice way to do it. There's already a fair number of officially published games with keyboard and/or mouse support, including a couple of programming ones. It has an on-screen keyboard too of course but you wouldn't want to rely on that more than absolutely necessary.

It does have a HID keyboard API, but typically you're expected to present an on-screen keyboard.

Over the network most likely?


