Visual 6502 in JavaScript (visual6502.org)
155 points by kijduse 12 days ago | 20 comments


I recently wrote a "remix" of visual6502 (just for fun), with C (and a bit of C++), compiled to WASM to see whether the rendering performance of the chip visualization can be improved while still running in browsers, and also to improve the "UX" a bit:

https://floooh.github.io/visual6502remix/

Check Help -> About for a list of dependencies used in that project (lots of good stuff in there), the two most important being the original data sets from visual6502 and a C re-implementation of the transistor-level simulation called perfect6502 (https://github.com/mist64/perfect6502).


Does yours have a clock speed readout? Can’t seem to find it.

No display for that; I was mostly interested in the single-stepping capability for investigating the chip behaviour and validating it against my CPU emulators.

But when clicking the "play" button, it's throttled to one half-cycle per 60 Hz display frame (requestAnimationFrame), so it should usually run at 30 Hz.
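
In browser terms the throttle is roughly this pattern (a hypothetical sketch with stubbed-out function names, not the actual remix code):

    // placeholder stubs standing in for the real simulation and renderer:
    function simHalfCycle() { /* advance the netlist simulation by one half-cycle */ }
    function drawChip()     { /* redraw the chip visualization */ }

    function frame() {
        simHalfCycle();
        drawChip();
        requestAnimationFrame(frame);   // one half-cycle per display refresh, i.e. ~60 Hz
    }
    requestAnimationFrame(frame);

On a 60 Hz display that works out to 30 full clock cycles per second.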

I haven't checked how fast the WASM version would run unthrottled compared against a natively compiled version of perfect6502, but performance should be somewhat close (much closer than to the JS version anyway).

As far as I have seen, the C rewrite in perfect6502 uses a handful of compact arrays for the simulation state, unlike the JavaScript version, which seems to be more like a huge graph of linked nodes where each node is a JS object, so the C version should be a lot more cache-friendly.
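
Roughly the difference in data layout, as a simplified JS sketch (field names, array names and sizes are made up, not the actual code of either project):

    // one JS object per node, linked into a big graph (visual6502-style):
    var node = { id: 123, high: false, gateOf: [], connectedTo: [] };

    // versus a few flat typed arrays indexed by node number (perfect6502-style),
    // which keeps the hot state contiguous in memory:
    var NUM_NODES  = 2048;                          // placeholder, not the real node count
    var nodeHigh   = new Uint8Array(NUM_NODES);     // 0 = low, 1 = high
    var nodePullup = new Uint8Array(NUM_NODES);
    var nodeGates  = new Int32Array(NUM_NODES * 4); // placeholder fan-out limit

Iterating over a few typed arrays touches far fewer cache lines than chasing pointers through thousands of small heap objects.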


This page was confusing to me until I followed the link to the GitHub project page and saw "Transistor level 6502 Hardware Simulation in Javascript". Why that same sentence isn't anywhere on the demo page is a mystery.

A bit off-topic, but I'm constantly annoyed by applications using 'x' and 'z' for related operations, like zoom in and out in this case.

The reason is that German keyboards use the QWERTZ layout, and as you can tell from the name, the 'z' key is in the upper row, right in the middle, nowhere near 'x'.

Maybe use 'w' and 's' instead? That's the default in first-person-type games. Actually, never mind, that doesn't work for the French, who have AZERTY...


Since mouse dragging works fine, I expected the scroll wheel to control zooming. In fact, that was the first thing I tried before reading the instructions.

When this came out in 2010/11 (not sure when I first heard about it), it blew my mind.

However, I was really, really hoping that we'd have a version for the Z80 by now.


Here you go :)

http://www.visual6502.org/JSSim/expert-z80.html

But AFAIK nobody really knows yet whether it works in all situations, because not all of the "trap transistors" that the Zilog designers put in to make reverse engineering harder have been found yet.

...maybe it would have been better to decap one of the "unlicensed clones" of the Z80, like the East German U880, because that definitely had the trap transistors fixed ;) The U880 had some minor differences in the undocumented behaviour too, though.



Really cool, I remember seeing it here over 8 years ago too! :D

http://www.visual6502.org/JSSim/expert.html

A nice throwback for sure!

(https://news.ycombinator.com/item?id=4108557)


Is there any information on the program the simulator runs by default? I couldn't find anything in the user guide or the FAQs.

Here’s the source code (or at least comments about the assembled op codes) to the example program: https://github.com/trebonian/visual6502/blob/master/testprog...

It looks like that subroutine they call at $0010 has a special hook that writes to the console whenever they write to memory address $0f.
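
Presumably the hook looks something like this on the simulator side (a hypothetical sketch, not the actual visual6502 source):

    var memory = new Uint8Array(0x10000);      // the simulated 64 KB address space (placeholder)

    function writeMemory(addr, value) {
        memory[addr] = value;
        if (addr === 0x000f) {                 // the "magic" console address used by the test program
            console.log(value.toString(16));   // placeholder; however the page actually formats its output
        }
    }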


It's an infinite loop that increments/decrements registers. This emulator is too slow to run real-world programs.

Visual 6502 is a godsend for emulation. For a brief time I dabbled in emulating the 6502 and every question that couldn’t be answered by the manual was answered by this.

Couldn’t live without it <3


Any chance of compiling some benchmarks (SPECint) for this to see how well it compares to the original silicon?

You can actually check on the webpage when the simulation is running: on my machine it shows around 17 Hz, so it's about 60000x slower than a 1 MHz 6502.
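(1 MHz / 17 Hz ≈ 59,000, which is where that rough 60000x figure comes from.)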

For comparison, the C reimplementation of the transistor-level simulation, running unthrottled and without visualization (I think that's the main performance killer), is about 150x slower than the original silicon on a modern CPU (according to the readme here: https://github.com/mist64/perfect6502).


Is that 150x slower based on a single core?

Yes, but IMHO spreading the simulation over multiple threads would be quite a challenge. As far as I understood it, the simulation starts with an initial state of high/low nodes, then for each changed node switches the connected nodes throughout the node graph accordingly until the entire chip simulation "settles down", and then moves on to the next node.
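
If I read that right, the settle loop is conceptually something like this (a simplified sketch with made-up helper names, not the real perfect6502 or visual6502 code):

    var nodeHigh = new Uint8Array(2048);   // per-node level, placeholder size

    function recalcNodes(changed) {
        while (changed.length > 0) {
            var next = [];
            for (var i = 0; i < changed.length; i++) {
                // all nodes joined to this one through currently conducting transistors
                var group = collectConnectedGroup(changed[i]);   // placeholder helper
                var level = resolveGroupLevel(group);            // pull-ups/pull-downs decide high or low
                for (var j = 0; j < group.length; j++) {
                    var n = group[j];
                    if (nodeHigh[n] !== level) {
                        nodeHigh[n] = level;
                        // a node that flips may switch transistor gates elsewhere,
                        // so the nodes behind those gates have to be revisited
                        next = next.concat(gatedNodesOf(n));     // placeholder helper
                    }
                }
            }
            changed = next;   // repeat until nothing changes any more, i.e. the chip "settles"
        }
    }

In this picture each pass depends on the gate states produced by the previous one, which is what makes the outer loop hard to spread across threads.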

Maybe this linear algorithm can be converted into some sort of parallel "cellular automaton", which would then probably be a much better fit for GPUs than CPUs.


This is just completely crazy, thank you for posting.

Reminds me of all those Minecraft computers lol. <3


