
That is why emulation, when targeting 100% accuracy, is a craft in our industry. Not only do you need to know each and every quirk the original hardware/software has, you also need to replicate it, however peculiar it is. And that's before considering the performance impact, as if the work itself weren't challenging enough.

Emulators have to be pragmatic about accuracy. When emulating more modern systems, it's generally not feasible to target both 100% hardware accuracy and usable performance, so they tend to accept compromises that are technically deviations from the real hardware but usually make no observable difference in practice. Anything that uses a JIT recompiler is never going to be perfectly cycle-accurate to the original hardware, but that usually doesn't matter unless the game code is deliberately constructed to break emulators.

Dolphin had to reckon with that balance when a few commercial Wii games shipped with such anti-emulator code, which abused details of the real Wii CPU's cache behavior. Technically they could have emulated the real CPU cache to make those games work seamlessly, but the performance overhead (likely a 10x slowdown) would make them unplayable, so they hacked around it instead.

https://dolphin-emu.org/blog/2017/02/01/dolphin-progress-rep...


I once wrote something that would hard-lock a Cortex-A8 but not the Cortex-A9 we shipped on. To my knowledge, nobody ever tracked down why our app, once exfiltrated from our device, would crash slightly older phones.

Were you exploiting an A8 erratum, or detecting "this is an A8" somehow and then making it barf in a less processor-specific way?

An A8 erratum. This was ages ago, but if I recall correctly, you could place a Thumb-2 instruction straddling two pages, only one of which was loaded in the TLBs. If you got everything right, the A8 would hang without trapping.

Edit: it was erratum #657417, long since scrubbed from arm.com


The A8 errata doc is at https://developer.arm.com/documentation/prdc008070/latest/ these days and does have a description of 657417 with enough detail to make writing a reproducer possible. Instructions crossing page boundaries are tricky beasts :-)

Well look at that! I had searched Google and arm.com for that number. This was definitely it.

So they'll patch around it.

You're just making your software worthless in the long run (for some value of "long run" probably less than 5 years), or creating a fun problem for an emu hacker.

Monetarily, most of the significant losses to piracy aren't from emulation; they're from the chippers/mods that bypass copy protection on cloned media.

Which emulator authors have a lot more control over bypassing.


You assume an anti-piracy attempt when GP, from my reading, made no such statement. More of a mystery, but who cares because the problem hardware wasn’t what they shipped on.

They used the word exfiltrate, it's not a stretch.

If it hardlocked an A8 but not an A9, chances are very high that an emulator would run it with no problem, because nobody deliberately tries to emulate the kind of CPU bug that lets an app hardlock the CPU. GP appears to have been interested in deterring people from running their code on non-authorised real hardware at the time, not targeting emulator users.

Bingo! Didn't want someone running new product's app on old product's hardware. Company was new to building non-RTOS devices which were tightly hardware bound, wanted similar type restrictions.

>but it usually doesn't matter

When it comes to speedrunning: some speedrunners do care, though, to ensure their speedrun techniques are reproducible on both emulators and real hardware.


That's true, the small differences between a pragmatic "accurate enough" emulator and real hardware can matter for speedrunners. The difference between real hardware running at 60fps and a principled cycle-accurate emulator running at <0.1fps would matter more, though.

For the SNES and earlier it's feasible to have exceptional accuracy and still usable performance, but for anything modern it's just not happening. Imagine trying to write a cycle-accurate emulator core for a modern CPU with instruction re-ordering, branch prediction, prefetching, asynchronous memory, etc, nevermind making it go fast.
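To illustrate why it's so costly, here's a toy sketch (not any real emulator's design; the opcodes, cycle costs, and device interface are all made up): cycle accuracy means every other chip in the machine gets stepped in lockstep with the CPU, so the inner loop runs once per cycle of every instruction rather than once per instruction or per frame.

```javascript
// Toy cycle-counting interpreter loop. CYCLES maps each (invented)
// opcode to its cycle cost on the (invented) original hardware.
const CYCLES = { NOP: 2, LDA: 4, STA: 4 };

function runCycleAccurate(cpu, devices, program) {
  for (const ins of program) {
    cpu.execute(ins);
    // The expensive part: ticking every other device once per CPU
    // cycle, instead of batching their work per instruction or frame.
    for (let c = 0; c < CYCLES[ins.op]; c++) {
      for (const dev of devices) dev.tick();
    }
  }
}
```

Now imagine the CPU also reorders instructions and speculates past branches, and "cycles per instruction" stops being a table you can look up at all.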


>For the SNES and earlier

I think the cutline can be moved to the original PlayStation now.

>but for anything modern it's just not happening.

Which arguably explains a cultural rift in arcade emulation circles. MAME's philosophy is cycle accuracy, which works well enough for arcade hardware up to the early 3D systems, whether bespoke (such as Namco's System 22) or console-derived (Namco's System 1x series, which all derive from the original PlayStation hardware). For newer arcade titles, which are just beefed-up period PCs, that kind of emulation philosophy does not suffice for gameplay.


> Anything that uses a JIT recompiler is never going to be perfectly cycle-accurate to the original hardware

beebjit [1] is a cycle-accurate JIT-based emulator for the BBC Micro. It can be done.

[1]: https://github.com/scarybeasts/beebjit


That is not perfectly cycle-accurate, but it is accurate enough to run almost anything without issues.

The good news is that modern systems are so unpredictable relative to each other that games can be relied upon to not require cycle-accuracy. IIRC the cycle timings can differ between different units of the same model.

I wonder how mainframe emulators (that sometimes are used to run legacy, very critical software on modern hardware) manage to do it. Do they go for full complete emulation? As in, implementing the entire hardware in software?

Most of those are JIT recompilers. Mainframe code doesn't usually depend on instruction cycle timing to the level that, say, beam-racing game code does.

Mainframes typically execute batch processes on a CPU. Much simpler than a game console with a GPU. Cycle-accurate emulation is less relevant for mainframes.

I remember that curating the metadata of 1000+ mp3s, syncing them between music players, and backing them up to CD-RWs was a time filler. It still is, but I enjoyed doing it. A digital garden of the web 1.0 era, I could say.


I was so proud of my meticulously tagged mp3 collection, and even took the time to add album art to everything. I always wanted mp3s tagged with the original album they came from, even if they were from a greatest hits CD or something. (Looking back, this wasn't quite the right mindset, as sometimes the versions on a greatest hits CD or similar will be slightly different than the "real" album version, but it was my collection!)


I maintain mine. It's the only way to get guaranteed gapless playback in the modern era.



I kinda agree that `new URL()` need not bail out when the URL is invalid. Both practices exist in the spec: `new Date('foo')` returns an Invalid Date and `parseInt('foo')` returns NaN, while `new Array(-1)` throws a `RangeError`. Perhaps there is a need for URL instances representing invalid URLs? Then we come back to an Either<x, y> return type.
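The split is easy to see side by side (behavior as I'd expect per spec; worth checking in your runtime):

```javascript
// Invalid input, two families of behavior:
const badDate = new Date('foo');     // an "Invalid Date" object, no throw
const badInt = parseInt('foo', 10);  // NaN, no throw

let arrayError, urlError;
try { new Array(-1); } catch (e) { arrayError = e; }  // throws RangeError
try { new URL('foo'); } catch (e) { urlError = e; }   // throws TypeError
```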

However, it is the `try...catch` pattern that messes with `const`, not the URL constructor. It is very annoying every time I have to wrap an existing block in a try...catch and inevitably lose the const-ness of some variables, unless I wrap everything again in a function and `return` from the try block when things go normally.
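A minimal sketch of the annoyance (`parseUrl` is just an illustrative name): the variable must be declared outside the `try` block, so it is forced to be `let` and stays reassignable for the rest of the scope.

```javascript
function parseUrl(input) {
  let url; // forced to `let`: it can't be `const` and be assigned inside `try`
  try {
    url = new URL(input);
  } catch {
    url = null;
  }
  return url;
}
```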


Re: try-catch vs. const, this is indeed one of the top annoyances in JS these days. I hope this proposal makes it into the language one day:

https://github.com/nzakas/proposal-write-once-const?tab=read...


Thanks for the reference. I am surprised that Java already has a solution and the choice of keyword "final" is a nice one.


I often find myself using an immediately invoked function expression to handle the try catch const assignment issue, for example:

  const result = (() => {
    try {
      return new URL(someString)
    } catch {
      return null
    }
  })()
It's not pretty though, and I think the do expressions proposal would rid my codebases of this pattern: https://github.com/tc39/proposal-do-expressions


Thanks for this, I think your syntax is an improvement over the solutions I've tried


try/catch messing with const is a well-known problem in modern JS & one that imo needs a generalised solution, not one specific to a single constructor.

As far as other error return patterns existing in "the spec" - there are multiple specs referred to here (ECMA vs DOM) & also multiple eras; there seems to be consensus that some past patterns on error return weren't great decisions & some well considered consensus to move to more broadly accepted known patterns (like try/catch).

e.g. for your Date example, that's a really old API that ECMA are working on replacing with Temporal (which throws)


My guess is that the reality is "98% of websites are valid UTF-8 documents". A large portion contain only ASCII, so they happen to be valid UTF-8, just indistinguishable from truly UTF-8-encoded ones until they break.
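That overlap is by design: every ASCII byte sequence is also valid UTF-8, so a pure-ASCII page passes as UTF-8 no matter what encoding its author intended. A quick check with `TextDecoder` (available in browsers and Node):

```javascript
const strict = new TextDecoder('utf-8', { fatal: true });

// Pure ASCII decodes fine under UTF-8 -- indistinguishable from "true" UTF-8:
const ascii = strict.decode(new Uint8Array([0x48, 0x69])); // "Hi"

// A lone Latin-1 byte such as 0xE9 ("é") is where the encodings diverge:
let broke = false;
try {
  strict.decode(new Uint8Array([0xE9]));
} catch (e) {
  broke = true; // TypeError: the "until they break" moment
}
```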


This project, along with TernJS, really inspired me to dig into the wild world of type inference for JavaScript with minimal annotation. TypeScript, or even JS with JSDoc, comes with learning overhead. Pure JavaScript projects often cannot be easily typed TypeScript's way without major refactoring, and sometimes a lightweight editor plugin suffices to provide enough insight for ease of development. There is no one-size-fits-all solution, and I wonder how far something like context-aware autocompletion can go without TypeScript.
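For scale, JSDoc is one middle ground: tools such as TypeScript's `checkJs` mode can type-check an ordinary `.js` file like the contrived example below without converting it (`repeatGreeting` is a made-up name).

```javascript
/**
 * @param {string} name
 * @param {number} times
 * @returns {string}
 */
function repeatGreeting(name, times) {
  // A checker that reads JSDoc will flag repeatGreeting(42, 'x')
  // even though this file remains plain JavaScript.
  return `hello ${name} `.repeat(times).trim();
}
```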


IMHO type checking for dynamically typed languages should be done by the IDE instead of messing with the language itself. Fortunately, at some point there will be an addon for your favorite IDE that does the type checking without touching your code. Microsoft could have developed such a rigorous type-checking addon for VSCode, but (unfortunately) they went for TypeScript instead. No clue why they preferred the latter -- maybe another bid to dominate the no. 1 browser language?


JavaScript is inherently untyped. Either the programmer tries very hard to keep code type-safe, or someone invents a programming language that compiles only to a sound subset of it. TS takes the latter route. But things quickly get out of control once you introduce dependencies, and even one dependency without an interface file can pollute your codebase with "any". I do not think TS would be mainstream if not for the many packages on npm that include type definitions. On the other hand, a type checker/inferencer that "does less", like Jedi, has proved a great success in the Python community.


I don't understand this point of view: aren't types part of the code? Perhaps one could eliminate 90% of the need to write them down with aggressive use of Hindley-Milner, but in many places they're a deliberate choice.


The discussion taking place is about type checking plain Javascript. The comment you replied to questions why Microsoft didn't go down that road, something they ended up doing eventually anyway, instead of introducing a new language.


SQLite does allow one to keep the entire database in memory: https://www.sqlite.org/inmemorydb.html


But is still orders of magnitude slower than a hash-map.


If you want ordering, then a hash-map doesn't help you.


Still orders of magnitude slower than a B-Tree Map.
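Concretely, in JS (taking "hash-map" to mean a `Map`): iteration follows insertion order, so every ordered or range scan pays a sort per query, which is the work a B-tree map bakes into its structure so that iteration is already ordered.

```javascript
const m = new Map([[3, 'c'], [1, 'a'], [2, 'b']]);

// Iteration follows insertion order, not key order:
const asInserted = [...m.keys()];                    // [3, 1, 2]

// An ordered scan costs O(n log n) on every query:
const sorted = [...m.keys()].sort((a, b) => a - b);  // [1, 2, 3]
```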


I don't think that any random image format can really replace PNG for web in terms of browser support...


I tend to imagine the clock wrapping around when the shortest needle points to the top. Intuitively this is doable by speeding up that needle by a factor of two. But of course there would then be another set of ticks.


I think SFTP is a good but underrated protocol when mirroring a file tree bidirectionally makes more sense than cloning one to the other. Having forked and studied SSHFS's code, I am currently maintaining a list of resources and some personal thoughts at https://hackmd.io/@q/sftp-over-ws.


SFTP is a major step up from FTP, but there's a lot of unrealized potential on the server side, so you can't just work on better clients. Both OpenSSH and the GNU lsh server only offer an old version of the protocol -- v2, I think. That oldness is intentional: https://marc.info/?l=openssh-unix-dev&m=168488976013498&w=2

