What’s up with the mysterious inc bp in prologues of 16-bit code? (2011) (microsoft.com)
66 points by userbinator | 28 comments



I stumbled across this while searching for something very similar, and found it very interesting that 16-bit Windows, while best known for being cooperatively multitasked, also had what could be called cooperative virtual memory.


Or 'automatic overlays', depending on the angle of anachronism you want to take.


The terminology of the time talked about "discardable segments" of course.


Please elaborate?


It's in the article. Windows could unload code segments that weren't running to free space for other purposes. But return addresses into those segments might still be on a call stack. So it used the otherwise irrelevant low bit of the saved BP to record whether a frame was made by a near or far call, which let the unload code fix up the far return addresses and reload the segment on demand.
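
For anyone who didn't click through, the shape of the trick looks roughly like this (16-bit x86, NASM-style sketch; the labels and the bodies are placeholders, not the actual Windows or compiler output):

    bits 16

    far_function:            ; reachable via far call from other segments
            inc  bp          ; make BP odd before saving it: "far frame"
            push bp          ; saved BP on the stack now has its low bit set
            mov  bp, sp      ; normal frame setup
            ; ... body ...
            pop  bp          ; restore the (odd) caller BP
            dec  bp          ; undo the inc so BP is even again
            retf             ; far return: pops offset and segment

    near_function:           ; only ever called from within its own segment
            push bp          ; saved BP stays even: "near frame"
            mov  bp, sp
            ; ... body ...
            pop  bp
            ret              ; near return: pops offset only

Since the stack is word-aligned, a saved BP is normally even, so walking the BP chain and checking the low bit of each saved value tells the kernel which frames have a full segment:offset return address it can patch.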


So if I get this right: the stack is limited in size, and certain "frequent use" routines were labeled as "near" while others with less frequent use were labeled as "far" and there was a dynamic way to keep frequent ones in memory?


It's a 16-bit architecture. Any code that doesn't fit in 64K needs to be in a separate segment. In practice, the way typical C compilers managed this (there were a bunch of code generation options) was to put all the code from a single C/obj file into its own segment. Calls within the segment were "near" and used the simple CALL/RET variants, which took and pushed a 16-bit address. Others were "far" and called and returned through a segment:offset pair.
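
Roughly, the two call shapes look like this (NASM-style 16-bit sketch; the segment value and labels are made up purely for illustration):

    bits 16

    start:
            call near_routine    ; near call: pushes a 16-bit return offset;
                                 ; target must live in the same 64K code segment
            call 0x2000:0x0000   ; far call: pushes CS then IP, so it can
                                 ; reach code in another segment
            ret

    near_routine:
            ret                  ; near return: pops the 16-bit offset

    ; a routine living in another segment would instead end with
    ;       retf                 ; far return: pops offset, then segment

The compiler's memory-model options (small, medium, large, and so on) largely came down to which of these it emitted by default for calls and for data pointers.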


God I’m glad that era is over.


From 2011


The author's book, The Old New Thing, is worth a read, even though it was written years ago.


The incredible ingenuity of those old-skool engineers. Can you imagine modern-day TDD nodejs “ninjas” trying to solve this problem under these conditions?


In some ways the complexities of the NodeJS stack are harder to work with than what we used to have to deal with. Constraints exist no matter what level of the stack you are working in. I imagine most of those "ninjas" would do just fine under those conditions since they would have taken just as much time to learn and understand the constraints of the system they were working in as they took to learn the constraints of the nodejs ecosystem.


> the NodeJS stack are harder to work with

These are the devs who need to download a library to pad a string, remember.


Yes. Do you actually understand the constraints that led to them needing to do something like that? It was deliberate and a result of the constraints of the NodeJS ecosystem and language. Oversimplifying the issue as a form of virtue signaling is unbecoming.


A competent programmer wouldn’t need a third party library to pad a string even in ASM, let alone an Internet connection. And everyone knows it. So enlighten us all, rockstar, why do you?


I don't even use JavaScript and I'm pretty far from a rockstar, but I frequently make a function for simple operations like padding a string. I'll also frequently use a function from the stdlib for things like padding a string. It's a sign of a mature and good programmer, I think.

Node is somewhat interesting since its stdlib is pretty much non-existent, and since it's JavaScript it's somewhat beholden to the world of browsers, which shapes its tooling and culture. Those constraints are why there are a lot of one-function libraries out on NPM, including left-pad. Your attitude, however, shows that you aren't actually interested in understanding the constraints of that ecosystem. What you do appear to be interested in is putting down that community in order to look cool.

HN is traditionally pretty hostile to that sort of attitude which is why you keep getting downvoted.


I think the better engineers used the MC68000 in their designs and had nothing to do with this nonsense. I started reading TFA and stopped as soon as I realized it was about a crummy abstraction that should never have happened.


There’s a saying that anybody can design a bridge that will withstand a load, but it takes an engineer to design a bridge that will just barely withstand a load.

In other words, limiting resource usage and cost is the whole point of the game. Yes, you can solve it by throwing a fancy expensive CPU at it, but that’s not necessarily good engineering.


There's also a saying that if a bridge cracks 1/3 of the way through, that is not a safety factor of 3.


Why would the 68000 improve anything? My recollection is that bus error recovery was impossible, so you'd end up having to do something similar, or worse...


And the M68000 didn't have an MMU, nor privilege management. The M68010 had support for external MMUs (and privilege management), but no built-in MMU. It wasn't until the M68020 that a built-in MMU was added. So any attempt to have virtual memory on an M68000 would have necessitated hacks not too dissimilar to the inc bp real mode Windows hack. Indeed, the Amiga OS (which also was a cooperative multi-tasking, virtual memory OS) had a segmented 64KB addressing mode precisely so as to support virtual memory even though the M68000 was properly a 32-bit CPU with 32-bit addressing. I don't know the details of how Amiga implemented virtual memory beyond that though.


Also, the 68020 did not have an MMU on-board. You still required a separate MMU chip, such as the 68851. See https://en.wikipedia.org/wiki/Motorola_68851

The 68030 and above did come with MMUs, except for low-cost versions of the chip.

Why do I remember this stuff? I was a big Motorola fan back in the 90's (Amiga especially)


Ah yeah, I forgot about the 68851. And yeah, the Amiga was amazing.


Amiga OS was actually preemptive multitasking. It did not support virtual memory. See https://en.wikipedia.org/wiki/Exec_(Amiga)

You may be thinking of the original MacOS.


Not demand paging, but I thought it loaded and unloaded 64KB segments as needed, much like Windows 1.0.


Are you thinking of "overlays"? AmigaOS did support loading and unloading of code segments, but this may be a bit different than what you're thinking of. It had to be done manually by the calling application. See https://en.wikipedia.org/wiki/Amiga_Hunk#Overlaid_executable...

I did quite a bit of Amiga C in the late 80's and early 90's, and resource management, including memory and files, was all quite manual. Maybe in 4.0 it got something better? I stopped working with the platform by then...


Thanks for the info!


The original Macintosh stored executable code in "code resources" that were a maximum of 32K each, as I recall. You could not only have more than 32K total, you could have more than your total RAM. If you were ambitious you could manage what was in memory for efficiency.




