Clock Signal: A latency-hating emulator of 8- and 16-bit platforms (github.com)
105 points by AnthonBerg 7 days ago | 23 comments

So how does this fit into the world of cycle-accurate emulation? It seems like the new part is the video approach: emulating the decay of a CRT?

The screenshots looked closer to what a CRT looks like but still seemed more pixelated than I’d expect.

Phosphor-decay simulation isn't new - xscreensaver has had it for ages. What is new, and slightly mindboggling, is that they're going below "treat display as pixels and post-process them" and generating an actual PAL signal.

I've not checked to see whether they're keeping that in frequency space or actually emitting samples at some rate above 4MHz. But it's an interesting approach.
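For anyone unfamiliar with composite encoding, here is a toy sketch (not this emulator's code; the function name and scaling are illustrative) of what "generating an actual PAL signal" entails: the composite voltage is luminance plus a colour subcarrier whose V component flips sign on alternate lines, the "Phase Alternating" in PAL.

```python
import math

PAL_SUBCARRIER = 4_433_618.75  # PAL colour subcarrier frequency, Hz

def composite_sample(y, u, v, t, line):
    """One time-domain composite sample from Y/U/V at time t (seconds)."""
    v_sign = 1 if line % 2 == 0 else -1        # phase alternation per line
    phase = 2 * math.pi * PAL_SUBCARRIER * t
    return y + u * math.sin(phase) + v_sign * v * math.cos(phase)
```

Emitting samples of this at some rate comfortably above the subcarrier frequency is the "actually emitting samples" option; keeping Y, U and V separate until later in the pipeline is the frequency-space option.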

https://github.com/TomHarte/CLK/blob/master/Outputs/CRT/CRT.... seems to be well-commented.

Ah! Cool!! If they're generating an actual PAL signal I guess that explains why the picture in the emulated Macintosh Plus started to jump around like a bad VHS tape when I set the emulation speed too high for the host to keep up.

Edit: I confirm that the code is really pleasant to work on and understand! Found this project by pure accident this morning and was able to hack meaningfully on it right away. Kudos to the developer.

Yes - it looks like the system advances a certain number of scanline elements per CPU clock, so if you run the CPU clock faster than the video pixel clock you start missing elements.

Hi, it's the author here. I'm new to Y Combinator, so please forgive any etiquette transgressions.

I've been through a few implementations of this; originally each machine provided data in any format it liked, plus the GLSL necessary to decode that into a time-domain composite signal. Then Apple deprecated OpenGL, so I retrenched to supporting a few fixed pixel formats, which are in the InputDataType enum in ScanTarget.hpp. Based on what I'd found useful up to then, it's a mix of direct sampling and frequency-space stuff.

Luminance in one or eight bits, and the four RGB options in different bit sizes, are standard PCM, but Luminance8Phase8 and PhaseLinkedLuminance8 are both frequency space. The former is pretty straightforward; with the latter you supply four luminance samples per output sample, and the one that's active at any given moment is a function of phase. It sounds a bit contrived, but it means that the amount of data I supply doesn't need to be a colour-clock-related multiple of what it is for some machines.
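If it helps to make that concrete, here's a sketch of the phase-linked selection as I understand the description; the function name and value ranges are invented, not the emulator's actual types:

```python
def phase_linked_sample(luminances, phase):
    """Pick the active luminance sample as a function of subcarrier phase.

    luminances: the four luminance values supplied for this output sample.
    phase: colour subcarrier phase, normalised to [0, 1).
    """
    assert len(luminances) == 4
    quadrant = int(phase * 4) % 4  # which quarter of the colour cycle is active
    return luminances[quadrant]

# At phase 0.3 the second quarter-cycle is active, so the second sample wins.
assert phase_linked_sample([0.1, 0.5, 0.9, 0.2], 0.3) == 0.5
```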

Earlier implementations of the decode were a lot smarter than the current one: I used an intermediate composite buffer running at the least integer multiple of the input data rate that gives at least four samples per colour clock. To that I applied a 15-point FIR lowpass filter to separate luminance from chrominance, and then I continued from there. I actually think this is the correct solution, and I want to return to it soon.
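As a concrete example of that buffer-rate rule (a sketch only; the rates and the function name are illustrative, not from the codebase):

```python
import math

def composite_rate(input_rate, colour_clock):
    """Smallest integer multiple of input_rate giving >= 4 samples per colour clock."""
    multiple = math.ceil(4 * colour_clock / input_rate)
    return multiple * input_rate

# e.g. a 7.16 MHz input against NTSC's ~3.58 MHz subcarrier needs a 2x buffer,
# which lands on exactly four samples per colour clock.
assert composite_rate(7_159_090, 3_579_545) == 2 * 7_159_090
```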

Unfortunately I'm at the extreme shallow end of the pool in terms of GPU power, as I use a 1.1GHz Core M with its integrated graphics to power a 4K display, so 15 samples per pixel proved somewhat problematic. I switched to taking four evenly-spaced samples per colour clock, irrespective of the input rate, and just averaging those to try to knock out exactly the colour subcarrier. Or, I guess, that's like the average of two comb filters. At the time I thought it looked fine, it's still a genuine approach to decoding composite video even if it's a complete digital fiction, and it ran faster, so I went with it.
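To see why four evenly-spaced samples work, here's a toy numeric model (not the emulator's actual signal path): the plain average cancels the subcarrier exactly, and the two sample differences behave like I/Q demodulation.

```python
import math

def separate(luma, chroma_amp, chroma_phase):
    """Model: four samples across one colour-clock period of a toy composite signal."""
    samples = [
        luma + chroma_amp * math.cos(2 * math.pi * (i / 4) + chroma_phase)
        for i in range(4)
    ]
    recovered_luma = sum(samples) / 4            # subcarrier averages to zero
    # In-phase / quadrature components fall out of the sample differences.
    i_comp = (samples[0] - samples[2]) / 2
    q_comp = (samples[1] - samples[3]) / 2
    recovered_amp = math.hypot(i_comp, q_comp)
    return recovered_luma, recovered_amp

luma, amp = separate(0.6, 0.25, 1.0)
assert abs(luma - 0.6) < 1e-9 and abs(amp - 0.25) < 1e-9
```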

With hindsight I didn't really do enough due diligence, I think partly because I spend so much more time working with PAL than NTSC.

The most prominent machine for which that approach doesn't work is the NTSC Master System; that produces pixels in-phase with the colour clock, and each pixel occupies two-thirds of a colour clock. So they alias like heck, and because it's in-phase I don't even get temporal dithering to mask the problem. I haven't yet implemented an in-phase PAL machine, so the aliasing tends to be much less prominent.

Anyway, right now I'm getting towards the end of a Qt port, so that Linux users finally get the full UI experience if they want it. After wrapping that up, and with an eye on Apple's announcements this week, I'm going to have to admit that I'm really at the end of the road of being able to treat OpenGL as a lingua franca, and I'm going to get started on a Metal back-end for the Mac target. I think I'll probably also switch back to the 15-point FIR filter for composite decoding while I do, for all targets. I have a long-stale branch for reintroducing that under OpenGL which I'll seek to revive.

Also, there are a couple of bugs in the current implementation that I'm almost certain are race conditions, and that could do with reinvestigation. The OpenGL ScanTarget is supposed to be a lock-free queue that rejects new data when full; I don't know whether I've messed up with a false memory-order assumption, or made an error even more obvious than that, but hopefully it'll come out in the wash. Especially if I'm accidentally relying on x86 coherency guarantees.
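For the curious, the rejects-when-full policy looks behaviourally like this (a plain Python model of the policy only; the real ScanTarget does this lock-free with atomics, which is where the memory-order question bites):

```python
class DropWhenFullQueue:
    """Bounded single-producer ring buffer that rejects, rather than blocks,
    when full. Class and method names are invented for illustration."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.read = 0    # next slot to consume
        self.write = 0   # next slot to fill

    def push(self, item):
        if self.write - self.read == len(self.buf):
            return False                          # full: drop the new item
        self.buf[self.write % len(self.buf)] = item
        self.write += 1   # in C++ this store would be memory_order_release
        return True

    def pop(self):
        if self.read == self.write:
            return None                           # empty
        item = self.buf[self.read % len(self.buf)]
        self.read += 1
        return item

q = DropWhenFullQueue(2)
assert q.push('a') and q.push('b')
assert not q.push('c')      # full: rejected, producer never blocks
assert q.pop() == 'a'
```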

So, yeah, summary version: lots of room for improvement, some improvements hopefully coming soon.

I think what we're seeing is what a high-quality somewhat idealized CRT would display given a limited-quality input signal. Whereas your memories are mostly of cheap CRTs that further degrade/distort things beyond mere analog composite video signal encoding.

It's definitely an interesting approach, especially for software that relies on analog signal artifacts to smooth out its dithering and blend colors.

Right: a Professional Video Monitor (PVM) CRT—of the kind that game developers were using at the time—looks like this up close: https://d2rormqr1qwzpz.cloudfront.net/photos/2013/07/11/5007...

Notice the bottom-right image: the precisely-calibrated beam placement on the PVM’s shadow mask means that you see very defined subpixel-like regions; and yet those regions are themselves only partially lit, such that if you filter for e.g. the red color channel, you’ll see a clear, continuous waveform shrinking and expanding “behind” the red shadow-mask holes, where a signal that’s getting “brighter” as it travels is actually forming a widening cone of light within the subpixel. This is what “intensity” on a CRT actually means: less like making a lightbulb brighter, more like changing the aperture of a spotlight gobo.

Yes, this is caused by the electron gun spitting out more electrons; but the phosphor those electrons hit can only get so excited, so the (many) extra electrons that land in the center do nothing after a point, while the (few) extra electrons that land at the edge eventually seem to widen the beam as they begin to probabilistically excite those edge regions as well. This is what happens if you “clip” the top of a Gaussian distribution: you see a flat, top-clipped region in the center, which expands or contracts in size in proportion to the number of events in the distribution.

You can simulate this effect yourself in Photoshop: set a black background, then take a circular gradient of 100% white to 100% transparent (i.e. a white circle that fades to nothing), set its layer blend mode to Screen, and then copy-and-paste that layer on top of itself repeatedly. The center doesn’t get any brighter, but the circle seems to expand as the edges fill out.
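The same saturation effect can be checked numerically; this is just an illustration of the clipped-Gaussian claim above, with arbitrary units and an invented function name:

```python
import math

def lit_width(peak, sigma=1.0, threshold=1.0, step=0.01):
    """Width of the region where a clipped Gaussian beam profile reaches
    the phosphor's saturation threshold."""
    xs = [i * step for i in range(-500, 501)]
    profile = lambda x: min(peak * math.exp(-x * x / (2 * sigma ** 2)), threshold)
    lit = [x for x in xs if profile(x) >= threshold]
    return (max(lit) - min(lit)) if lit else 0.0

# Doubling the beam intensity can't brighten the saturated centre,
# but the fully-lit core gets measurably wider.
assert lit_width(4.0) > lit_width(2.0) > 0.0
```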

Good information, and beautifully put. Thank you.

There is a screenshot of Stunt Car Racer in there! How I miss that.

That V8! Sublime!

Where is the file that defines the host-keyboard-to-emulated-joystick mapping? For example, where is # for the ColecoVision?

Author here; I'm still not sure I've done the right thing here, but for the ColecoVision you need to type something that is literally a hash, since it's mapped logically. So e.g. on my keyboard it's shift+3. But if you have a UK keyboard then you might well have a hash key.

I've probably allowed myself to stretch an early implementation decision too far here. My approach is usually to run with something good enough until it stops working and then replace it rather than to try elaborately to plan ahead; I may have crossed that threshold on joystick input.

Thanks for the reply. I really like the authentic video emulation.

The game I tried was Boulder Dash; you need to press # to start, and I tried every key on the keyboard. Maybe shift+3 is mapped to # on the second emulated joystick? If you could point me to the correct file, I could hack on it.

Or maybe I need to install a BIOS file somewhere?

There's some ugliness to acknowledge here: user remapping of controls is not yet implemented. With that acknowledged:

The easiest place to hack would be https://github.com/TomHarte/CLK/blob/master/Machines/ColecoV... — change the Input('#') to declare a key you prefer (e.g. Input('-')), and then change the corresponding case statement within the block starting on line 62.

If you follow that approach, you'll have modified the ColecoVision so that it declares a joystick with a minus key rather than a hash key, which it's hard to imagine the host binding getting confused about.

If you'd prefer to look at the other side of the interface and figure out why your host isn't posting '#' properly then you'll see the SDL fragment at https://github.com/TomHarte/CLK/blob/master/OSBindings/SDL/m... — it takes the first character of whatever SDL provides as the string input produced by a key (and always posts only to the first joystick).

You'll see similar logic for the Cocoa port at https://github.com/TomHarte/CLK/blob/master/OSBindings/Mac/C... varying only in that if Cocoa doesn't provide typed characters then it defaults to the first fire button.

I definitely need to go back and look at this stuff again. As I said, it's sort of a model that did make sense but has become overloaded. I think I need to switch to an internal representation of joysticks and joypads that preserves some information about layout (even if it's just number of rows of fire buttons, number of fire buttons per row, whether there's a start/select, whether there's a number pad, etc) and work forwards from there.
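That layout-preserving representation might look something like this sketch (field names are invented, purely an illustration of the idea, not the emulator's types):

```python
from dataclasses import dataclass

@dataclass
class JoypadLayout:
    """Hypothetical descriptor preserving physical layout information."""
    fire_button_rows: int
    fire_buttons_per_row: int
    has_start_select: bool
    has_number_pad: bool

# A ColecoVision controller: two side fire buttons plus a 12-key number pad.
colecovision = JoypadLayout(fire_button_rows=1, fire_buttons_per_row=2,
                            has_start_select=False, has_number_pad=True)
assert colecovision.has_number_pad
```

A host binding could then work forwards from the layout: e.g. map the host's number row to the pad only when has_number_pad is true.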

Even at full size, the naive images look much better to me.

When people say that raw pixels look better, I point to this set of screenshots from Final Fantasy 6: https://www.resetera.com/threads/crt-shaders-scanlines-ot-be...

The raw pixels on the "bricks" look flat and kind of illegible. And it's very natural to read these "bricks" as bricks because their rectangular shape is so stark on an LCD pixel grid, and the discrete brightness changes between the stark square pixels of each "brick" suggest roughness.

But on a CRT (or an emulation of one—note how the illusion works with basically any of the CRT filters in the post), the "bricks" seem instead to have realistic bump-maps, jutting them out of the wall slightly; and the intensity-waveform nature of the CRT makes the edges of the bricks less uniform. And yes, while the "brick's" surface is blurrier, in this case that actually makes it much more legible that these "bricks" are actually weathered stone masonry (with randomly-angular smooth-ish faces), not rough "bricks" at all!

Which makes a lot of sense: who builds a castle out of bricks? But you'd be forgiven for thinking Squaresoft were idiots who never saw a castle before, if you had never seen the game on a CRT.

Original author here; just to note: all machines that have an RGB output in real life can be used with an emulated RGB screen connection in the emulator.

The only thing that isn't going to be undone is the aspect ratio: the emulator has no facility to force 1:1 pixel mapping. There is some filtering to help avoid too much ugliness there, and I think that's entirely correct in a world where pixel density is once again on the rise, but it isn't a panacea.

The 68000 Mac's chequerboard desktop is the absolute worst case for this emulator, which is a shame because it's that machine's default.

But in terms of detriment versus detriment — incorrectly proportioned output versus output that doesn't look good until you have a high-resolution display — I'm hopeful that I'm on the right side of history.

Of course they look better, but the simulated CRT images show how the game was supposed to look back when it was created.

I disagree that they look better. They certainly look sharper but that's not necessarily what you want.

I grew up playing Atari, Nintendo, and Commodore 64 connected to battered old 13" and 19" analog TVs via an RF modulator. Sometimes even a black and white TV. The CRT glow and pixel-adjacent color fringes and distortions were part of the experience and take one back in time. There's just something magical about it for those who grew up with that.

But CRTs give a much richer, warmer picture!

Seriously, CRTs are the vinyl records of video games, with all the same confusion of aesthetics with fidelity due in part to nostalgia and in part to an entire industry calibrating itself to the flaws of the medium for decades.

I mean, modern displays are better in a lot of ways, but most CRTs have at most a couple gate delays between the console output and the electron beam.

Plus, the games were (generally) designed to run in an environment with all the warts of a CRT; sometimes the lack of pixel boundaries is important.

For some games, yes. The graphics in others were designed for CRTs, though – take Sonic.
