Hacker News | gergoerdi's comments

Debugging via high-level simulation is something my book spends a lot of time on. If you look at the sample chapters, you can see that the same Clash code can also be compiled into software Haskell, which you can then interface with non-synthesizable test benches, such as an interactive SDL frontend. So you can take your HDL logic and run it directly, interactively in real time. You can play Pong by compiling the code as software.

One level lower, you can use Clash's signal-level simulator. Basically it gives you a synchronous stream of signal values, either as a lazy list (for "offline" simulation), or as an automaton that you can turn the crank on by feeding it the next clock cycle's inputs (for "online" simulation, i.e. where you want to do IO to compute the next input from the previous outputs). So at this level, you'd take your Pong circuit and use the automaton interface of the simulator to feed the virtual "pushbutton" states computed from e.g. keypresses, and then consume the output to do the rendering. Or simulate the whole circuit end-to-end and feed its output into a VGA interpreter, which you also get to write in Haskell.
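To make the two interfaces concrete, here is a plain-Haskell sketch of the same idea (no clash-prelude dependency; Clash's actual `simulate`/`mealy` machinery is richer, and all names here are made up):

```haskell
import Data.List (mapAccumL)

-- A synchronous circuit as a Mealy machine: current state and input
-- in, next state and output out.
type Circuit s i o = s -> i -> (s, o)

-- "Offline" simulation: feed a lazy list of inputs, get back a lazy
-- list of outputs, one per clock cycle.
simulateOffline :: Circuit s i o -> s -> [i] -> [o]
simulateOffline circuit s0 = snd . mapAccumL circuit s0

-- "Online" simulation: an automaton you crank one cycle at a time,
-- so each input can depend (via IO) on the previous outputs.
newtype Automaton i o = Automaton { step :: i -> (o, Automaton i o) }

toAutomaton :: Circuit s i o -> s -> Automaton i o
toAutomaton circuit s = Automaton $ \i ->
  let (s', o) = circuit s i in (o, toAutomaton circuit s')

-- Toy circuit: a resettable counter, outputting its current state.
counter :: Circuit Int Bool Int
counter n reset = let n' = if reset then 0 else n + 1 in (n', n)

main :: IO ()
main = do
  print (simulateOffline counter 0 [False, False, True, False])
  let (o1, _rest) = step (toAutomaton counter 0) False
  print o1
```

The offline run prints `[0,1,2,0]`: the counter counts up until the reset input arrives on the third cycle.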

If you need to debug at the Verilog level, you can use Clashilator (https://github.com/gergoerdi/clashilator) to automate FFI-ing into a Verilator-generated simulation.


Your website is of course referenced in the Compucolor II chapter of the book. It was invaluable in getting my Compucolor II implementation working. In fact, I even sent you a PR to fix the TMS 5501 chip's behaviour to match its datasheet (https://github.com/jtbattle/ccemu/pull/2/).

I chose the Compucolor II for its simplicity. Its original design goal of cheapness via a very small number of components translates into a big win for my purposes: implementing many small special-purpose chips would have bloated the book considerably without adding much extra value. With the Compucolor II, we can just take the Intel 8080 core from an earlier chapter, implement two custom chips, take the UART from another earlier chapter, and boom, done.

In fact, I chose the Intel 8080 in the first place instead of the more widely used Z80 for the same reason: adding the Z80 extensions wouldn't bring anything new to the table, but would increase cruft. Turns out there's a small but reasonable number of 8080-based home computers (https://retrocomputing.stackexchange.com/q/11682/115).


In the book (see the sample chapter 8 at https://unsafeperform.io/retroclash/#samples) we first create a proto-almost-game (just a bouncing ball) by directly wiring signals together.

However, the resulting circuit description is much harder to understand and extend than a more structured approach, so we rewrite it in a more principled manner by decomposing it into two parts: an `Input -> State -> State` circuit used as a register transfer enabled by the start of the vblank, and a `State -> Coordinate -> RGB` circuit connected to the video output signal generator. This has the added benefit that we can compile the same description as software Haskell instead of hardware Clash, and so we can use high-level simulation to run the bouncing ball in an SDL window.
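The shape of that decomposition, sketched in plain Haskell (all names and constants here are invented for illustration; this is not the book's actual code):

```haskell
import Data.Word (Word8)

type Coordinate = (Int, Int)
type RGB = (Word8, Word8, Word8)
type Input = ()  -- the bouncing ball ignores buttons; Pong adds paddles

data St = St { ballPos :: Coordinate, ballVel :: (Int, Int) }

-- Pure state transition, applied once per frame at vblank.
update :: Input -> St -> St
update _ (St (x, y) (dx, dy)) = St (x + dx', y + dy') (dx', dy')
  where
    dx' = if x + dx < 0 || x + dx >= 640 then negate dx else dx
    dy' = if y + dy < 0 || y + dy >= 480 then negate dy else dy

-- Pure pixel function, queried by the video signal generator.
render :: St -> Coordinate -> RGB
render st (x, y)
  | inBall    = (255, 255, 255)  -- white ball
  | otherwise = (0, 0, 64)       -- dark blue background
  where
    (bx, by) = ballPos st
    inBall = abs (x - bx) < 5 && abs (y - by) < 5

main :: IO ()
main = print (ballPos (iterate (update ()) (St (0, 0) (2, 1)) !! 3))
```

Both functions are ordinary pure Haskell, so the same code can drive either a Clash register-transfer circuit or a software SDL loop.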

Sample chapter 9 then creates a Pong game by just changing these two (pure, Haskell) functions slightly. With minimal changes, we go from idle animation to playable game!



That's cool! I wanted to avoid having to build Rust and/or LLVM from source myself, hence the somewhat awkward "tell Cargo we're on the default target, let Clang sort it out at link time" setup.


From what I understand, LLVM-MOS treats large parts of the zero page as virtual ("imaginary") registers, so you have no shortage of that (https://llvm-mos.org/wiki/Imaginary_registers). Then, sufficiently advanced compiler technology improves the stack situation (https://llvm-mos.org/wiki/C_calling_convention).


6502 assembly has the distinct advantage of having special page-0 instructions for reading from and writing to memory, including, if I recall correctly, the ability to take a 2-byte sequence and treat it as a 16-bit value (or was that in the AppleSoft ROM?).


The main way to do pointer indirection (without self-modifying code) is to use the zero-page-specific indirect addressing modes, which use a 2-byte address stored in zero page as a pointer to a byte in memory. (And on the original 6502, the only available addressing modes for this forced you to use the X or Y register as an index, so you had to set it to 0 first!)
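In emulator terms, the two zero-page indirect modes compute their effective address roughly like this (a Haskell sketch with memory modeled as a plain lookup function; the helper names are made up):

```haskell
import Data.Word (Word8, Word16)

type Mem = Word16 -> Word8

-- Combine low/high bytes, little-endian.
addr :: Word8 -> Word8 -> Word16
addr lo hi = fromIntegral lo + 256 * fromIntegral hi

-- ($zp,X): add X to the zero-page operand (wrapping within page 0),
-- then read the 2-byte pointer from there. E.g. LDA ($10,X)
indexedIndirect :: Mem -> Word8 -> Word8 -> Word16
indexedIndirect mem zp x =
  addr (mem (fromIntegral (zp + x)))
       (mem (fromIntegral (zp + x + 1)))

-- ($zp),Y: read the 2-byte pointer from zero page first, then add Y
-- to the resulting address. E.g. LDA ($20),Y
indirectIndexed :: Mem -> Word8 -> Word8 -> Word16
indirectIndexed mem zp y =
  addr (mem (fromIntegral zp)) (mem (fromIntegral (zp + 1)))
    + fromIntegral y

-- Toy memory: a pointer to $2345 stored at $20/$21.
mem :: Mem
mem 0x20 = 0x45
mem 0x21 = 0x23
mem _    = 0

main :: IO ()
main = do
  print (indexedIndirect mem 0x10 0x10)  -- reads pointer at $20/$21
  print (indirectIndexed mem 0x20 0x10)  -- $2345 plus Y
```

Note that the `Word8` arithmetic on `zp + x` wraps automatically, modeling the 6502's behavior of staying inside page 0.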


You can treat 2 bytes (not just in the zero page, though) as indirect jump addresses, yes.

Doing something like "JMP ($2345)" will jump to whatever address is stored at $2345/$2346.
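Sketched as an emulator helper (illustrative Haskell, not any particular emulator's code; it also models the well-known NMOS 6502 quirk where "JMP ($xxFF)" fetches its high byte from $xx00 instead of crossing the page):

```haskell
import Data.Bits ((.&.), (.|.))
import Data.Word (Word8, Word16)

type Mem = Word16 -> Word8

-- Effective target of JMP (ptr) on an original (NMOS) 6502.
jmpIndirect :: Mem -> Word16 -> Word16
jmpIndirect mem ptr = fromIntegral lo + 256 * fromIntegral hi
  where
    lo = mem ptr
    -- Hardware quirk: the pointer increment doesn't carry into the
    -- high byte, so $23FF's partner byte is $2300, not $2400.
    hiAddr = (ptr .&. 0xFF00) .|. ((ptr + 1) .&. 0x00FF)
    hi = mem hiAddr

-- Toy memory: $2345/$2346 point to $C000; $23FF/$2300 demo the bug.
mem :: Mem
mem 0x2345 = 0x00
mem 0x2346 = 0xC0
mem 0x23FF = 0x34
mem 0x2300 = 0x12
mem _      = 0

main :: IO ()
main = do
  print (jmpIndirect mem 0x2345)  -- $C000
  print (jmpIndirect mem 0x23FF)  -- $1234, high byte taken from $2300
</imports>
```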


It's a little amazing how much 6502 assembler sticks with me 35 years later.

But only a little. I didn't have the money to buy an assembler or the skill to write one so I would write out my programs in long-hand on graph paper and hand-assemble them before entering hex codes manually. While not the most efficient process, it did do a good job of encoding things into long-term memory.


Haha, yes, I can relate. I didn't do any 6502 coding for ~25 years and it mostly just stuck around. Apparently it's like riding a bike.

In the meantime I've forgotten most of the 68000 and Z80 instruction sets.


Did you look at chirp8-engine, or only chirp8-c64? The value add is not in the parts that interface with the C64 internals; probably using C for that would make for nicer code. But I wanted to push as much into Rust as I could in the short amount of time I spent on this.

The real advantage of using Rust is in the actual program logic. E.g. the instructions are decoded into an algebraic datatype (in https://github.com/gergoerdi/chirp8-engine/blob/7623353a8bf0...) and then that is consumed in the virtual CPU (https://github.com/gergoerdi/chirp8-engine/blob/7623353a8bf0...). Rust's case-of-case optimization takes care of avoiding the intermediate data representation at runtime.
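The decode-then-execute split, sketched here in Haskell rather than the repo's actual Rust, with a made-up two-instruction slice of CHIP-8:

```haskell
import Data.Bits (shiftR, (.&.))
import Data.Word (Word8, Word16)

-- A made-up two-instruction subset of CHIP-8, decoded into an
-- algebraic datatype instead of being dispatched on raw nibbles.
data Instr
  = LoadImm Word8 Word8   -- 6XNN: VX := NN
  | Jump Word16           -- 1NNN: PC := NNN
  deriving (Show, Eq)

decode :: Word16 -> Maybe Instr
decode op = case op `shiftR` 12 of
  0x6 -> Just (LoadImm reg imm)
  0x1 -> Just (Jump (op .&. 0x0FFF))
  _   -> Nothing
  where
    reg = fromIntegral ((op `shiftR` 8) .&. 0xF)
    imm = fromIntegral (op .&. 0xFF)

-- The virtual CPU consumes the structured form; with enough inlining
-- the compiler can fuse decode and exec so the intermediate Instr
-- value never materializes at runtime.
exec :: (Word16, Word8 -> Word8) -> Instr -> (Word16, Word8 -> Word8)
exec (pc, regs) instr = case instr of
  LoadImm r n -> (pc + 2, \i -> if i == r then n else regs i)
  Jump a      -> (a, regs)

main :: IO ()
main = do
  print (decode 0x6A42)
  let (pc', regs') = exec (0x200, const 0) (LoadImm 0xA 0x42)
  print (pc', regs' 0xA)
```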


They could put a tiny microcontroller that has a USB HID host and translates to PS/2 connected to the FPGA. So you have a USB socket on one side, you plug your normal USB keyboard or mouse into it, then that socket is connected to the microcontroller, and there are two wires (PS/2 clock and data) connecting the microcontroller to the FPGA.

Some hobbyist and educational FPGA dev boards do that, because handling PS/2 on the FPGA is so much easier than raw USB. Here is one example: https://reference.digilentinc.com/reference/programmable-log...

"The Auxiliary Function microcontroller (Microchip PIC24FJ128) provides the Nexys A7 with USB Embedded HID host capability. "


That's a nice idea. I suspect they must really be trying to keep the BOM cost down though.

I was struck by just how little there is on that board other than the Spartan-6 FPGA, which looks to be one of the smallest versions of what is now a ten-year-old design.

There's one SOP/SOIC-8 package by the FPGA which I assume is configuration flash. One 50MHz crystal near to that. Some power regulation on the right-hand side of the board. One Alliance SRAM package. And that's it other than the passives and connectors!

I guess it's possible there's some interesting stuff on the other side but I'm doubting it.

EDIT: Wow, I see they are selling this for GBP 300, which is over USD 400. You really aren't paying for the hardware; this has less content than a USD 70 dev board.


You can just buy an off-the-shelf converter like this one on eBay: https://www.ebay.com/itm/254740257221


You can assume that USB mice will become increasingly unlikely to work with those as time goes on: they're just passive adapters that signal to the mouse "hey, someone is plugging you into a PS/2 port through an adapter" and hope that the mouse knows how to switch protocols.

As PS/2 becomes a deep legacy standard, the likelihood is that mouse manufacturers will simply stop bothering to include that capability.


But for now it should work and is easy.


HoTT isn't a programming language, because there are non-value normal forms. That's the whole reason behind research into various formulations of Cubical Type Theory, which is a programming language.


Intensional intuitionistic type theory is a programming language. Throw in higher inductive types (still a programming language at this point) and Voevodsky's univalence and you've got HoTT. Then sure, the simplicial model is not constructive, whereas the cubical models give computational meaning to univalence, but they're still just that, models of HoTT. So would you prefer I had said HoTT has models that are programming languages?


I think at this point, Haskell is the most likely to become the first mainstream PL with Pi types: https://gitlab.haskell.org/ghc/ghc/-/wikis/dependent-haskell

