Porting a NES Emulator from Go to Nim (hookrace.net)
158 points by def- on May 1, 2015 | 81 comments



> As I really liked fogleman's NES emulator in Go I ended up mostly porting it to Nim. The source code is so clean that it's often easier to understand the internals of the NES by reading the source code than by reading documentation about it.

Win!


Went to read your code after that comment; it is truly very clean code! :+1:


Let me get this straight. We have an emulator for 1985 hardware that was written in a pretty new language (Go), ported to a language that isn't even 1.0 (Nim), compiled to C, then compiled to JavaScript? And the damn thing actually works? That's kind of amazing.


The amazing thing about computer software, relative to every other human endeavour: once you've got one component right, it's right; you can treat it like a simple little node in an even more complex design, and scale it up 1,000,000x, and it'll keep working every single time.

Once you've built a C-to-JS compiler, and a Nim-to-C compiler, you've got a perfectly-functioning Nim-to-JS compiler. There's no game of telephone gradually confusing things; the fidelity is perfect no matter how deep the pipeline goes.

(This is also what's amazing about layered packet-switched networks, come to think of it. The core switch transferring data from your phone to AWS doesn't need to speak 4G or Xen SDN; it's just coupled to them, eventually, by gateways somewhere else, and it all just keeps working, one quadrillion times per second, all over the world.)


It's so funny that you say that, because you are describing the way that I wish the world worked, but I have found that in practice it rarely does.

For example, here is a list of ways in which the C-to-JS compiler is far less than "perfect": http://kripken.github.io/emscripten-site/docs/porting/guidel...


That's what I was really trying to get across—the difference between "imperfect" in other engineering, and "imperfect" in software. In other engineering, "imperfect" means "will randomly fail 0.00000001% of the time."

In software, though, "imperfect" means "will work perfectly, 100% of the time, even after a quadrillion tests, within a well-specified subset of all possible inputs; will fail in arbitrary unknown ways outside of that subset."

It's different from being "within tolerance"—even within tolerance of a physical system, stresses are still going on. Physical systems have a lifetime, and repair procedures, for a reason.

But in software, you don't have to worry about "stress within tolerance." In fact, even if someone else builds an "imperfect" software system that accepts inputs with undefined results, you can just wrap it with an input validator, and suddenly it's a well-defined-100%-of-the-time system!

(Of course, in implementation, software has to run on hardware, which is the other kind of imperfect system. But, surprisingly, you can write software to compensate even for that, with failover and quorum-agreement &c.)


> you can treat it like a simple little node in an even more complex design

Unix Philosophy right there (right?)


More like engineering philosophy


What's that obligatory talk link where he describes that everything will be compiled to JavaScript and run in the browser over the next 50 years?



Perfect, thanks!


You're part of the post now!


Old hardware is not necessarily hard to emulate. Often it's the opposite.


I agree it's not the hardest part, it just makes the overall picture more delightful to imagine.


This title covers the essence of hacker news pretty well.


Also see: https://news.ycombinator.com/item?id=7745561

My favorites:

> I decided to re-implement Javascript in Javascript. It failed. Here is my story

> ReactOS running IE6 in a JavaScript x86 emulator: we put a browser in your browser so you can browse while you browse

> Introducing js.js: a JIT compiler from JavaScript to JavaScript


Did you port from Go to Nim by hand, or was it automated in any way?

I thought that Go would be the last language I'd write by hand. Previously I wrote C++, which was a dead end in that I could never use tools to parse it and translate to a new language. But with Go it should be much easier to do that if/when I ever decide to switch to something else.

The performance of the emulator in the browser (compiled via emscripten) is very impressive! It felt like a solid 60 FPS to me. I wonder how the Go version compiled via GopherJS would compare? Have you tried?


I did it by hand, changing the code as I saw fit and learning as much about the NES as I could.

I haven't tried GopherJS, but I don't have much Go experience apart from reading it. On a related note, I read that porting the Go version to Android would be difficult and is still far in the future: https://github.com/fogleman/nes/issues/7


There is also a NES emulator written entirely in JavaScript called jsnes[1] that would be interesting to compare to. The "solid 60 FPS" doesn't always translate to mobile. For example, jsnes runs fine on an iPhone 5S (64-bit processor), but stutters on an iPhone 5.

[1] https://fir.sh/projects/jsnes/


Forget about mobile, it runs at like 5 FPS on my Core i3 laptop.


Well this is new. I've never seen Nim before. What does it offer that Rust, Haskell, Erlang, etc... do not?


Two of my old posts may answer that question:

- http://hookrace.net/blog/what-is-special-about-nim/

- http://hookrace.net/blog/what-makes-nim-practical/

Summary by benhoyt: https://news.ycombinator.com/item?id=8822918

  * Run regular code at compile time
  * Extend the language (AST templates and macros);
    this can be used to add a form of list comprehensions to the language!
  * Add your own optimizations to the compiler!
  * Bind (easily) to your favorite C functions and libraries
  * Control when and for how long the garbage collector runs
  * Type safe sets and arrays of enums; this was cool:
    "Internally the set works as an efficient bitvector."
  * Unified Call Syntax, so mystr.len() is equivalent to len(mystr)
  * Good performance -- not placing too much emphasis on this,
    but it's faster than C++ in his benchmark
  * Compile to JavaScript
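A few of those bullets can be sketched in a handful of lines of Nim (illustrative only, not code from the posts):

```nim
type Color = enum red, green, blue

# Type-safe set of enums; internally this is a bitvector
var seen: set[Color] = {red}
seen.incl blue
assert blue in seen and green notin seen

# Unified call syntax: these are the same call
let s = "hello"
assert s.len == len(s)

# Regular code running at compile time
proc double(x: int): int = x * 2
const answer = double(21)   # evaluated by the compiler
assert answer == 42
```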


Thanks, that first post is really interesting. Two things stand out to me:

* Run regular code at compile time - does this just error if you try to use const on something that depends on a runtime value?

* Add your own optimizations to the compiler - isn't this very dangerous, especially if optimizations from various sources of code get combined? Even if your optimizations are valid (which there seems to be no guarantee of), computer math is notorious for being different from real math. People unaware of the nuances of integers rolling over or floating point seem like they could easily shoot themselves in the foot here.


> * Run regular code at compile time - does this just error if you try to use const on something that depends on a runtime value?

Yep. For example `const foo = stdin.readLine()` would result in "Error: cannot evaluate at compile time: stdin".
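To make that concrete, a small hedged sketch (the exact error wording may vary by compiler version):

```nim
proc shout(s: string): string = s & "!"

const greeting = shout("hello")   # fine: evaluated at compile time
echo greeting                     # hello!

# By contrast, this would be rejected with something like
# "Error: cannot evaluate at compile time: stdin":
#   const line = stdin.readLine()
```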

As for your second point: I haven't used this feature personally, but I have heard that the compiler will tell you exactly where these optimisations are applied. You can also disable these optimisations very easily during compilation.


For reference, compared with Rust:

    * Yes, with compiler plugins
    * Yes, with macros
    * Maybe. I think that you could use a compiler plugin
      to do arbitrary transformations on all the code in a
      module, but I haven't seen an implementation.
    * Yes
    * What garbage collector?
    * Yep
    * Kind of. Any method can be invoked as a function with
      the <type>::func(val, args...) syntax, but the 
      opposite's not true.
    * Check
    * Yep, via Emscripten


Rust isn't working with Emscripten yet.



That's a great list, but the question was about stuff that's not in Haskell (among others) so a few nitpicks:

* Haskell can run regular code at compile-time (But it is slow, using ghci/TH)

* Haskell can add optimizations (rewrite rules and plugins). I think Haskell pioneered rewrite rules, and Nim was probably inspired by that?

* Binding to C easily is also a Haskell feature

* Type safe sets and arrays of enums: Is this not just a library? Haskell sets and arrays of enums are type-safe by default?

* Unified call syntax is true of Haskell too

* Compile to Javascript -- Haskell can do that too


I'm a big Haskell fan (it was my favorite language before discovering Nim), but here's my personal list of Nim advantages over Haskell:

    - Predictable performance (lazy evaluation is problematic)
    - Easy to achieve high performance
    - Easier to read, even for non-specialists
    - I'm much more productive in it
    - Multi-paradigm: Nim is mostly imperative, but OO and functional can be mixed as well.
http://hookrace.net/blog/conclusion-on-nim/


> I'm much more productive in it

That's the point! I know many languages and Nim is by far the most productive one. It's like coding in Python with the assurance that the compiler will catch many errors that would cause runtime errors in Python. That assurance makes coding in Nim faster than in Python because I don't have to think so much about avoiding runtime errors.
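A tiny illustration (my own sketch) of the kind of mistake the compiler catches that Python would only surface at runtime:

```nim
var xs = @[1, 2, 3]
xs.add(4)          # fine
# xs.add("five")   # rejected at compile time: type mismatch,
#                  # got a string where an int was expected
echo xs
```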


Thanks.

Does Nim compile to C99 ?

If someone were to ask what's wrong with Nim, what would you say are the things that need to be fixed?


C89, so it's also compatible with Microsoft's vcc.

There are quite a few compiler bugs remaining in Nim, and the standard library could be improved.


I find the easiest way to think of Nim is "A Python-style language that compiles to C". Of course, the truth isn't as simple as that. But I like it, especially how pragmatic the language is. It doesn't try to "prove" a philosophy like Rust, but I think it gains other advantages in return.


I'd say that it feels more like a scripting language in comparison to Rust and is more approachable than Haskell or Erlang while being fast since it's compiled to C.


being compiled to C doesn't imply fast :)


It does have other advantages though, like inheriting C's excellent portability.


Unless done badly, it does.

Transpiling to C gives you the opportunity to emit code that is straightforward, clean, and fast C for most operations, which avoids abstraction overhead.


No, it doesn't. You could easily compile Python into C (basically, "just" inline the core interpreter loop), but it will still be slow, because the Python semantics still require a lot of dictionary resolutions on every operation. Getting efficient C still requires you to think about getting efficient code. You can not get it "for free" just by waving a C compiler over your language. Otherwise, everyone would just do that and there would be no languages slower than C.

Before you reply to disagree, make sure to very, very carefully think through the implications of the last sentence there. It is not as if compiling to C is particularly hard; it is literally "homework assignment for a normal college senior in a compilers course"-difficult. Many languages have gone through a phase where they compile to C. Usually they deliberately leave C at some point, another thing to carefully ponder the implications of before replying in disagreement.


>No, it doesn't. You could easily compile Python into C (basically, "just" inline the core interpreter loop), but it will still be slow, because the Python semantics still require a lot of dictionary resolutions on every operation.

Yeah, and you could do this too at the start of your emitted code from the transpiler:

sleep(UINT_MAX)

which would also make it slower than an interpreted language. But I specified I wasn't talking about that, or about "just inlining the core interpreter loop", but about transpiling letting you also emit straightforward and clean code without extra abstractions for most operations.

A new language like Nim doesn't come with all the baggage of Python's runtime pre-existing, and doesn't need to recreate it.


HipHop doesn't support the entirety of the PHP runtime for exactly the reasons that have been laid out for you.

That's reality from a company with a shitton of money and a vested interest in doing it successfully.

They can't do it in the general case because reality disagrees with you.


Not sure what you're getting at. What I didn't say: transpiling is magic fairy dust for speed, and you don't have to care about language semantics at all.

What I said: "unless done badly" (my words in the original comment), it's fast, as it gives you a chance to emit straightforward C code for most operations. Now, regarding your example:

First, HipHop is not transpiled to C. It's a runtime with a JIT.

Second, what does "HipHop not supporting the entirety of the PHP runtime" have to do with anything?

I take it you mean that the fact they had to skip some parts of the runtime for increased speed proves something related to what I wrote. But (besides the JIT thing) PHP is not greenfield like Nim. They have a runtime, and it has to work in a certain way. PHP wasn't designed with transpiling in mind, and HipHop had to follow it closely as a design goal. I never said runtime decisions you have to mimic to be 100% compatible can't slow you down.


http://en.wikipedia.org/wiki/HipHop_for_PHP

> HipHop for PHP (HPHPc) is a PHP transpiler created by Facebook. By using HPHPc as a source-to-source compiler, PHP code is translated into C++, compiled into a binary and run as an executable, as opposed to the PHP's usual execution path of PHP code being transformed into opcodes and interpreted.


OK, I had in mind the JITed HipHop VM that they tout now (later development it seems).

Still, how does the original version of HipHop as a transpiling compiler invalidate what I said?

I said (check my original comment): unless you do it badly, transpiling to C gets you to run faster, because it lets you translate most operations to straightforward and fast C code.

And that's exactly the logic they followed and what they achieved: they transpiled PHP to C, as opposed to running it with PHP's runtime interpreter, in order to make it faster, and it worked.

That they skipped some parts of PHP runtime behavior/semantics, which would have slowed things down had they kept them, doesn't clash with what I said. In fact it's already covered in my comment: "it lets you translate MOST" (not all) operations to fast C code.


> transpiling to C gets you to run faster

You're changing the context.

> while being fast since it's compiled to C. - them

> being compiled to C doesn't imply fast :) - me

> Unless done badly, it does. - you

You've switched to "faster" because it's a more defensible position, but that isn't the idea that was originally being responded to. That's why jerf said the following, emphasis mine:

> No, it doesn't. You could easily compile Python into C (basically, "just" inline the core interpreter loop), __but it will still be slow__


You're really providing further evidence for my pet theory that anybody who seriously uses the word "transpiler" has no clue how compilers work, or what they can and cannot do. (This is on the theory that anybody who does have that understanding also understands that "transpiler" is an etymological solution to a problem that didn't exist... the word "compiler" already covered the bases completely and totally... if you actually understand them.)


> while being fast since it's compiled to C.

Which is fast compared to what? Being compiled to JS?

It's not like being compiled to a "fast language" automatically makes that compiled program fast. You also have to consider how complex the runtime is, and other variables which are probably a direct result of the semantics of the language.


It's closer to Go than Rust since it is garbage collected. It's statically typed, has OO features and first-class functions. It compiles to C making it pretty portable.


It has generics and meta-programming, and the GC can be disabled (or tuned to run for at most some number of milliseconds).

So it is Go++ if you wish.

But I am not sure about its concurrency. Go does have channels which are nice abstractions for doing concurrency right.
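The GC control mentioned above can be sketched like this (a hedged sketch using the GC_ procs from Nim's system module; exact behaviour depends on the collector in use):

```nim
# Ask the GC to keep individual pauses under roughly 2 ms
GC_setMaxPause(2_000)   # argument is in microseconds

# Or disable collection entirely around a hot section
GC_disable()
for i in 0 ..< 1_000_000:
  discard               # timing-critical work would go here
GC_enable()
```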


Nim has channels too (although I think they need replacing). There is also spawn which is similar to Goroutines in some ways: http://nim-lang.org/0.11.0/manual.html#parallel-spawn-spawn-...
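A minimal spawn sketch, assuming the threadpool module (compile with --threads:on; FlowVar results are read with the ^ operator):

```nim
import threadpool

proc fib(n: int): int =
  if n < 2: n else: fib(n - 1) + fib(n - 2)

let a = spawn fib(30)   # each spawn runs on a worker thread
let b = spawn fib(31)
sync()                  # wait for all spawned tasks to finish
echo ^a + ^b            # ^ blocks until the FlowVar has a value
```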


You can have manual control over memory if you want, though. This worked the last time I looked at the language, at least:

    foo = cast[ptr Foo](alloc(sizeof(Foo)))
    # do stuff with foo
    dealloc(foo)
EDIT: Here's the documentation on the site itself. Most things are 'traced' by the GC, but by using alloc and dealloc you can work with 'untraced' (i.e. manually allocated) memory: http://nim-lang.org/0.11.0/manual.html#types-reference-and-p...


Go has unsafe.Pointer too.


The binary size difference is quite striking. Linux distro packagers are going to like Nim, I think.


I couldn't tell if the Nim result was actually statically linked. If it was, then it almost certainly wasn't using glibc, because the binary would certainly be much fatter.


The Nim binary dynamically links against glibc and SDL2. If you really want to statically link against a C standard library, musl works just fine:

    @if musl:
      passL = "-static"
      gcc.exe = "/usr/local/musl/bin/musl-gcc"
      gcc.linkerexe = "/usr/local/musl/bin/musl-gcc"
    @end
Now it compiles like this (SDL2 is still dynamically loaded, +1 MB for that):

    $ nim -d:release -d:musl c src/nimes
    $ ls -lha src/nimes
    -rwxr-xr-x 1 def users 219K Mai  1 22:00 src/nimes*


If the Nim binary is dynamically linked, comparing the size to a Go binary isn't always valid. There are reasonable trade-offs to consider when selecting dynamic vs static linking. Admittedly, running an NES emulator on your desktop probably isn't one of those cases; you're not likely to care either way.

The Nim -> C -> emscripten path is very impressive. Kudos.


Go also dynamically links glibc by default on linux, for the domain name resolver functions.

I'm not sure what Go does on OS X, but I know that it's not possible to statically link the c library on OS X.


I thought gc only statically links and it was gccgo that had the ability to dynamically link.


gc only statically links to compiled go objects, and (may have changed recently? or soon?) only dynamically links to c libraries (via cgo).


Ubercool.

Question about Nim: from looking at https://github.com/def-/nimes/blob/master/src/nes/cpu.nim , I wonder: is there a way to give the number of cycles and the instruction encoding with the "op" template, so those 256-byte arrays get built automatically?


The problem is that an operation can take a variable number of cycles depending on which addressing mode the opcode uses.


I'm not the author, but I don't see why not; you can just have an extra param to the "op" template.


But how would you put it in the right place in the table? Can you just have something like

    cycles[opcode] := ncycles
run during compile time in the macro, resulting in a populated 256-byte table of all the opcode cycles and nothing done at runtime?


I'm not sure exactly what you're asking, but I can answer at least part of it. You can build lists at compile time in Nim via the `static` statement or `compileTime` pragma. Eg:

  # using the pragma here, but we could use a 'static' block instead
  var cycles {.compileTime.}: int

  # ---

  proc doSomething =
    static:
      # invoked once at compile-time (at this definition)
      cycles += 1

  proc doSomethingGeneric[T] =
    static:
      # invoked once at compile-time (per unique generic call)
      cycles += 1

  macro doSomethingAtCompileTime(n): stmt =
    # invoked at compile-time (per call)
    let ncycles = int n.intVal
    cycles += ncycles

  # ---

  doSomething() # this call doesn't affect 'cycles'; its declaration does (+0)
  doSomething() # ditto (just here to prove a point) (+0)
  doSomethingGeneric[int]() # this call affects 'cycles' (+1)..
  doSomethingGeneric[int]() # ..but only once (+0)
  doSomethingGeneric[float]() # this call also affects 'cycles'  (+1)
  doSomethingAtCompileTime(5) # this call affects 'cycles'  (+5)
  doSomethingAtCompileTime(12) # ditto  (+12)

  static:
    echo cycles # prints '20'
I'm not sure this helps solve anything in NimES, but this can be really useful for meta-programming in general. For instance, I'm using it to make an event system which generates efficient dispatch lists based on the types/procs defined. It's designed for game logic, to minimize both boiler-plate and dynamic dispatch. Ie, invocation is type-specific and usually does not use function pointers per instance, so smaller 'events' can be inlined. Plus, instances are generic (decoupled from object inheritance), and often don't require any header data. That combination should, in theory (still WIP), give devs a 'universal' way to model a broad range of game objects, from heavy single-instance types to lightweight projectiles and particles. Here's an example for a little clarity:

  # note: 'impl', 'spawn', and 'invoke' are macros
  
  type
    Foo = object
      val: int
    
    Bar = object
      val: string
  
  impl Foo:
    proc update = echo "Foo: ", me.val
    proc render = echo "Rendering Foo!"
  
  impl Bar:
    proc update = echo "Bar: ", me.val
  
  spawn Foo(val:123)
  spawn Bar(val:"abc")
  
  invoke update
  invoke render
  invoke blah
  
  # output:
  #   Foo: 123
  #   Bar: abc
  #   Rendering Foo!
  #   Invoke Error: No event 'blah' defined.
Perhaps that's not the best example to illustrate Nim's meta capabilities, but so far Nim is the only language I've come across that allows me to achieve this sort of thing (at least, directly from user-code).


Thanks, seems like the thing I'm looking for.

If you look at line 531 of the CPU source code (as of today, anyway), there are multiple 256 byte tables that give instruction encodings, lengths, and cycle counts.

What I was asking is: "Is it possible to put these as a parameter of the 'op' macro so that it builds these tables automatically"

The answer might be "No" if e.g. a single op has multiple instruction encodings. But assume that there is only one encoding per op.

An equivalent question is - can static: sections assign to an array that will be available at runtime? From your example, the answer appears to be yes.


> can static: sections assign to an array that will be available at runtime?

The answer is yes, but it's a tad trickier than just accessing the compile-time list from run-time code (which doesn't make sense, and is illegal). Instead, use a macro to generate a bunch of run-time checks against a specific value. Eg:

  var eventNames {.compileTime.} = newSeq[string]()
  
  proc defineEvent(name:static[string]) =
    static:
      eventNames.add(name)
  
  macro checkDefined(name): stmt =
    # begin new statement
    result = newStmtList().add quote do:
      echo "Checking for '", `name`, "'"
    
    # loop over every known event name and
    # build a run-time 'if' check for each one.
    for n in eventNames:
      result.add quote do:
        if `n` == `name`:
          echo "Found it!"
  
  
  # add some events to compile-time list
  defineEvent("foo")
  defineEvent("bar")
  
  # define some runtime values
  let eventName1 = "foo"
  let eventName2 = "blah"
  
  # check runtime values against compile-time list
  checkDefined(eventName1)
  checkDefined(eventName2)
  
  # output:
  #   Checking for 'foo'
  #   Found it!
  #   Checking for 'blah'
Note: This will inject a bunch of 'if' statements for each call to 'checkDefined', which might bloat your code; it's probably better to make a macro which defines a proc, then just call that to check run-time values. But I left those kinds of details out of this illustration for the sake of simplicity.


Thanks. I'm sure there's a way to promote a compile time seq into a constant runtime one. Might require some more macro trickery, though.


Err... what you said just reminded me of something, and I realized all the code I just showed you is really over-complicated and that Nim has much more straightforward options using `const`, like this:

  static:
    # define a compile-time list first
    var names = newSeq[string]()
    
    # add some values (at compile-time)
    names.add("foo")
    names.add("bar")

  # define the compiler vars as run-time const
  const runtimeNames = names

  # define some run-time variables
  var name1 = "foo"
  var name2 = "blah"

  # check runtime variables against const variable
  if runtimeNames.contains(name1): echo "Has Foo!"
  if runtimeNames.contains(name2): echo "Has Blah!"
Sorry about the rather long-winded replies (and bad examples) :| But thanks for the conversation; it reminded me of this, and now I have some cleaning up of my own code to get to. Cheers!
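Riffing on that: the same const trick also answers the original opcode-table question. A hedged sketch, with made-up opcode and cycle values (not the real CPU tables):

```nim
# Build a 256-entry cycle table entirely at compile time
# by evaluating an ordinary proc in a const initializer.
proc initCycles(): array[256, int] =
  # result is zero-initialized; fill in hypothetical entries
  result[0x69] = 2   # hypothetical: ADC immediate
  result[0x6D] = 4   # hypothetical: ADC absolute

const cycles = initCycles()  # evaluated by the compiler

# At runtime this is an ordinary constant array lookup:
echo cycles[0x69]  # 2
```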


Any plans to let people call Nim functions from Python with standard Python objects like strings/dicts/lists as arguments? This would let people write the fast parts in Nim and the slow parts in Python.


Hi yes, I'm a Nim community member who's working on that.

A simple version already exists and works (for Python primitive types and Numpy arrays, via the Python C-API), but it's embedded in my company's proprietary Python+Nim (mainly Python) codebase. I'm working in my spare time to extract the relevant code as a Nim library and release it as an open-source package on Github.

If you'd like to learn more about it, or you'd like to be notified when the first release is ready, please come and discuss it on the Nim forums! http://forum.nim-lang.org/
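As a hedged sketch of the general shape such a binding can take (this uses plain C-ABI export plus ctypes, not the Python C-API approach described above; the module and proc names are made up):

```nim
# fastmath.nim -- hypothetical module, built as a shared library:
#   nim c --app:lib -d:release fastmath.nim
# (a real library would also arrange for NimMain() to run,
#  to initialize the Nim runtime/GC before other calls)

proc sumInts(xs: ptr UncheckedArray[cint], n: cint): clonglong
    {.exportc, dynlib, cdecl.} =
  # sum a C array of ints passed in from Python
  for i in 0 ..< int(n):
    result += clonglong(xs[i])

# From Python, roughly:
#   import ctypes
#   lib = ctypes.CDLL("./libfastmath.so")
#   lib.sumInts.restype = ctypes.c_longlong
#   arr = (ctypes.c_int * 3)(1, 2, 3)
#   print(lib.sumInts(arr, 3))
```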


What's the point? Why not write everything in Nim?

If you depend on certain Python libs, that's understandable. But it should not be too hard to translate them to Nim, since the syntaxes of Python and Nim are not very different.


If a company has a significant amount of existing code in Python, no sensible engineering manager will agree to a complete re-write of the existing code-base. It would divert resources from more-pressing functionality (new features, bugfixes, etc) and almost certainly introduce its own new bugs due to the complete re-write.

(See also this classic Joel Spolsky article, "Things You Should Never Do, Part I": http://www.joelonsoftware.com/articles/fog0000000069.html )

This would be the case whether you schedule the re-write in a single blocking development effort (in which case, all forward progress would stop during that time) or broken into batches over time (in which case, it will be much longer until the new system is ready, and the old system will be a moving target as it continues to be developed).

Instead, the chances of a Nim-integration being beneficial to the company (and thus, your chances of getting approval from your engineering manager) are MUCH higher if you can simply write NEW functionality in Nim (or occasionally re-write small, self-contained inner loops in Nim) and the new Nim code integrates smoothly into the existing Python as a Python module.

This is the approach I've taken at my company (with my engineering director's approval).

In theory, you could even use skunkworks-Nim in your large Python codebase, as long as your Nim code presents itself as a good-citizen Python module, much like the tales of skunkworks-Scala being used in large Java codebases.


I am not talking about a complete rewrite of big business software written in Python. I just had small projects in mind which are typical for Python. Big business usually depends on Java, C# and C++ but not on Python.

http://www.quora.com/Why-do-large-corporations-use-Java-or-C...


Yeah, Python has a huge ecosystem of amazing libs. Many are actually C libs with Python bindings, so they're fast. It would be very time consuming to port them to another language.


When is it appropriate to use Nim instead of Cython to rewrite hot code?


I'm quite impressed about the small amount of code required for a NES emulator. I thought they'd have to do all kinds of special casing for cartridge-specific stuff…


The more accurate the emulator is, the less special casing there's a need for: special cases/hacks are necessary when the emulator takes shortcuts and doesn't implement the hardware features the game relied on. If the hardware features are fully implemented, all games ought to work without hacks. Hence bsnes (as far as I know) not needing game-specific hacks but requiring a hefty config to run.


The problem with the NES is that games have all kinds of different hardware inside the cartridges. In order to achieve high compatibility you need to emulate all of these chips. It's not nearly enough to accurately emulate just the NES itself.


Not all games work. Only the most popular mappers are implemented, which covers about 85% of games.


Many cartridges came with custom hardware; you have to implement each one specifically, which implies special casing for the hardware, which sometimes implies special casing for the individual game.


Indeed, I didn't think it could be so simple either. Figured "what the hey" and thought about writing one in Perl, but it looks like a few already beat me to it: https://metacpan.org/pod/Games::NES::Emulator


For anyone who was wondering, as I was: porting from libSDL calls to drawing on an HTML canvas is done automatically by emscripten.



