The license needs work. It claims to be GPL3 but then includes terms which completely violate GPL3, and GPL is not really applicable to hardware in any event.
It's really some form of CC-BY-NC-SA plus some additional limits about "safety", which is impossible to define and prove or disprove, and is none of the creator's business.
Basically it's almost untouchable until the terms are actually defined and made sensible.
Even the simple "only for hobby/personal/educational use" is internally inconsistent because education is itself a commercial activity.
Trying to say too much in the license is just wasting an exceptionally cool project. Just make it CC-BY-SA, add the warnings and disclaimers, and leave it at that. Only add the -NC if you want to sell them and be the only one allowed to sell them. If you aren't planning to sell them as an important part of your own livelihood, then don't add -NC; it doesn't make the world a better place.
this license stuff is endemic in vintage computing, very frustrating
people (usually with a limited engineering background) have a fantasy that they might start a hobby business, and are afraid someone will "steal" their design and sell a dozen on ebay
I'm not a lawyer, but what about those ubiquitous statements about how things are "provided without warranty"? It's clear that this guy is concerned about being liable for what could go wrong. Also, I'm not sure if laws and expectations are different in Germany where he seems to be based. Edit: sorry, his domain is .NL, but I thought otherwise because it seems a lot of people he cites working with in the readme have a .de domain.
The items in the readme all read as warnings and disclaimers, not as enforceable terms of the license agreement. It's basically saying everything you do with this is solely at your own risk, and if you use it in production, any liability is on you.
The attribution request is a "should", not a "must", and not part of the license.
Personally, I wouldn't interpret anything in the readme as binding conditions.
The readme stipulates several things which are allowed and not allowed. As worded, they do not read to me as disclaimers or advice. You can't ignore any stated terms and conditions you don't like; the terms and conditions apply no matter where they are written. The readme is no different from any other file, like, for example, comments in a random source file.
I did not say anything about attribution.
GPL3 already requires attribution and is not a problem.
Similarly my suggested CC-BY-SA includes BY.
> The readme stipulates several things which are allowed and not allowed.
I mean, not really -- it's mostly just a big disclaimer of liability, and the only time it uses the phrase "not permitted", it also just amounts to another disclaimer of liability, in that it's saying that it's "not permitted" to use the product without assuming full responsibility for the associated risks.
I miss the ISA bus and its simplicity. People raved about PCI's Plug and Play, but in practice I found it very straightforward to set IRQ jumpers and the experience was free of the quirky issues I encountered with PnP (especially in the early days).
I recall wiring an LED display and some very simple logic (like a buffer IC or something enabled by an address line in a non-existent memory segment) directly to an ISA wire wrap card and getting it to work on the first try. One of the reasons I love working with microcontrollers (especially the relatively clean 8-bit architectures like AVR) is they lack so many layers of abstractions.
One major aspect of PCI which I do respect is its incredible backward compatibility. I still use a 20-year-old Adaptec PCI SCSI card via an adapter carrier in my latest PC (drivers were fun but it works).
IRQ jumpers weren't too bad, but it was easy to run out of them, and not all devices could share interrupts well. PCI made IRQ sharing more accessible, but the real improvement was (in-band) Message Signaled Interrupts (MSI), which allow for a much larger number of interrupts, ending sharing in most cases, and also give a knowable ordering between I/O and interrupts, which can eliminate the race condition between interrupt arrival and status reads/interrupt enable.
The other big pains of ISA were finding working I/O assignments with lots of cards, and finding DMA channels (although ISA devices started doing their own bus-mastering DMA because the motherboard DMA controllers were fixed at a slow speed, by spec).
Gather round the CRT, kids, it's another story time from old man delni...
Back in the before times, Papa Delni set up an old 486 (actually a 386 using a Cyrix "486" CPU) as an X terminal. This was a hot setup with an ATI Graphics Wonder, an AMD Lance network card, 8MB of RAM, the whole 9 yards.
Everything worked great, except the network was terribly slow. Except when I moved the mouse; then it was nice and normal.
Turned out the network card and mouse were on the same IRQ, or maybe the driver was looking at the wrong IRQ, and the network buffers only got drained when the mouse was wiggled.
I actually used that little nugget of knowing things to know to switch to polled network buffers on a big fat firewall that was running freebsd -- it was constantly under huge load so there was no real advantage to waiting for IRQs.
Oh yes `HZ=4000`, `net.link.ifqmaxlen=200`, `net.isr.dispatch=…`. I don't miss running out of hardware resources that would be a few 100 (or 1000) gates to get right like a nesting, vectored interrupt controller with enough sources and priorities.
IRQ sharing on PCI came with its own pains, though, which were sometimes harder to diagnose and fix. In the early 2000s I was recording and mixing music with DAWs on Windows 2000, and you always had to be very careful that your PCI ASIO sound card did not share resources with anything else, or you got to deal with dropouts and glitches. Plenty of futzing around in BIOS setup and Device Manager; very flaky stuff.
Definitely. Sharing was better than not being able to use your slots when you ran out of IRQs, but sharing wasn't great. The idea of wiring 4 IRQs to the slot and letting the card/driver pick was good, but the reality wasn't great if your motherboard just tied all 4 to the same pin on the interrupt controller.
So often in computing we've completely overhauled the stack with loads of new complexity, when simply adding more of some limited but unexpectedly popular resource would have sufficed (great example... IPv6).
(Ps. Thanks for your informative reply to my comment)
Add a little more to this number here, and a little more to that number there, piecemeal and conservatively, and before too long you have more complexity than if you just designed for larger numbers in the first place.
How do you propose we add more IPv4? Not to mention that the IPv6 protocols are less complex than IPv4 - as they were designed after the "natural" evolution of IPv4 concluded.
One could have made an overlay network on top of IPv4, where applications (and/or operating systems' networking stacks) could have been aware of how to manipulate the data. Majorly more kludgy, but feasible. A number of real IPv4 addresses would have been allocated to serve as gateways/proxies to access the new IPv666 network.
It was also because a lot of HW (soundcards mostly) had a morbid preference for IRQs already allocated to the parallel (or serial) port.
Sometimes one could fix this in BIOS, sometimes not.
If you want to do something like connect an LED driver today, give SMBus a try! It's basically the I2C bus with some limitations, and a lot of motherboards bring it out via convenient headers.
In general, the I2C bus is a great way to interface with computers: it needs only 2 wires, can be easily connected even to an 8-bit MCU, and it has a great development experience: start with a $10 USB-I2C bridge (like the CP2112) for development, then connect the final device via an internal SMBus connector or via an unused monitor port. Linux comes with all the drivers, so there is no need to write kernel code.
(You can also do USB, but this is annoying to implement from scratch, without any libraries. And USB-Serial bridge might be taken over by ModemManager and various embedded IDEs.)
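One concrete difference between SMBus and plain I2C is worth a sketch: SMBus transactions can carry an optional Packet Error Checking (PEC) byte, a CRC-8 over the address and data bytes (polynomial x^8+x^2+x+1, i.e. 0x07, initial value 0x00). A minimal, self-contained computation of it in C, assuming you're building the message bytes yourself:

```c
#include <stdint.h>
#include <stddef.h>

/* SMBus PEC: CRC-8, polynomial 0x07, init 0x00, no bit reflection,
 * no final XOR. Computed over the slave address byte(s) plus the
 * command and data bytes of the whole transaction. */
uint8_t smbus_pec(const uint8_t *data, size_t len)
{
    uint8_t crc = 0x00;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++) {
            if (crc & 0x80)
                crc = (uint8_t)((crc << 1) ^ 0x07);
            else
                crc <<= 1;
        }
    }
    return crc;
}
```

On Linux, the kernel exposes adapters as `/dev/i2c-N`, and the i2c-tools utilities (`i2cdetect`, `i2cget`, `i2cset`) are handy for poking at devices from userspace before writing any code of your own.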
My first machine was a 286, so they have a special place in my heart. It should be pointed out that the clock speed of an ATmega328P (original Arduino / Arduino Nano) is nearly exactly the same as an 80286's. Two of the same could easily outcompute a 286! You can buy them for ~$3 at the moment...
I've put plenty of AVRs on ethernet - and once, a 386, using an ancient DOS TCP/IP stack, but never a 286. Linux doesn't support it due to requiring an FPU.
It'd be fun to design the electronics for an old-style computer, but IMHO it'd be far more amusing to do it from scratch with an original architecture than try recreating a third party motherboard using a similar layout scheme. Something with parallelism and auto-scaling would be quaint.
It's the lack of a "modern" paged memory management unit that's the main issue for mainline Linux on a 286 AFAIK. Nobody wants to deal with x86 segmented memory, having a nice flat addressing paged mode is one of the big wins of the 386. Although I believe Linux dropped support for 386 (and 486?) a while back so they could remove some special cases for such old chips.
Yes, paging is what is necessary to run Linux. The main problem it solves is how to allocate physical memory without having to constantly move variably-sized segments around to get rid of fragmentation.
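The fragmentation point can be made concrete: with paging, the kernel hands out fixed-size frames in whatever order they happen to be free, and the MMU stitches them into a contiguous-looking virtual range through a page table, so nothing ever needs to be moved. A toy single-level translation in C (4 KiB pages; the scattered frame numbers are invented for illustration):

```c
#include <stdint.h>

#define PAGE_SIZE   4096u
#define PAGE_SHIFT  12
#define NUM_PAGES   8

/* Toy page table: virtual page number -> physical frame number.
 * The frames are deliberately out of order: paging builds a
 * contiguous virtual range out of whatever physical frames are
 * free, so variably-sized segments never need compacting. */
static const uint32_t page_table[NUM_PAGES] = {
    7, 2, 5, 0, 3, 6, 1, 4
};

/* Translate a virtual address to a physical one, as an MMU would. */
uint32_t translate(uint32_t vaddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;      /* virtual page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);  /* offset within page  */
    return (page_table[vpn] << PAGE_SHIFT) | offset;
}
```

Real x86 paging is multi-level (two levels on the 386, more since), but each level is just another round of the same index-and-lookup step.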
But I'm kind of sad that all OSes converged on a flat address space for the sake of ~PoRtAbIlItY~. Monocultures are bad, and we currently have one, where everything is basically a UNIX clone written in C.
32-bit x86 allowed to use both segments (of any length up to 4 GiB) and paging at the same time. Before the NX bit was introduced, this was the way to have write-xor-execute permissions. And some security features of segmentation can't be replicated at all in a flat address space: with a separate stack segment (or one with the same flat base address but a different limit), it will be impossible to add the wrong offset to a pointer to stack and get access to data that is outside of it.
IMHO, the iAPX 432 had some good ideas, and x86 should have evolved in that direction, adding a few more "extra" segment registers which code can use freely as pointers to isolated objects. Each would have two "limit" fields (for negative and positive offsets from the base), with one of these spaces used for storing pointers to other objects, which can only be manipulated through CPU microcode that guarantees memory safety.
Instead they eliminated segments completely in 64-bit mode, except for FS/GS serving as an extra offset that gets added with no limit checking whatsoever.
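The limit checking being described can be sketched in software: under segmentation every access is conceptually base + offset, with the offset checked against the segment's limit before the access happens, so an out-of-range offset faults instead of silently reading a neighbor's data. A toy model in C (the base/limit values are invented; real descriptors also carry type and permission bits, omitted here):

```c
#include <stdint.h>
#include <stdbool.h>

/* A descriptor as the CPU sees it: base address and limit
 * (highest valid offset into the segment). */
typedef struct {
    uint32_t base;
    uint32_t limit;
} segment_desc;

/* Returns true and writes the linear address if the offset is
 * within the segment; returns false (a "fault") otherwise.
 * This is the per-access check a flat address space never makes. */
bool seg_access(const segment_desc *seg, uint32_t offset, uint32_t *linear)
{
    if (offset > seg->limit)
        return false;           /* #GP / #SS in real hardware */
    *linear = seg->base + offset;
    return true;
}
```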
I do agree with what you say and also have nice memories of 8086 segmentation. I find it funny that we are continuously forced to add workaround on top of workarounds to the flat address space in order to avoid accidental memory errors. Segmentation had all that decades ago, and easier debugability to begin with. But it is clear to me that we are moving again towards something like that, even if enforced at a different level.
I did not mind programming the 16-bit x86 model in assembly. You could do a lot of things such as use segment register pointers (16 byte aligned) for "large" data structures which could themselves be addressed with ordinary 16 bit pointers and chained together to make "extra-large" structures.
Compilers like Turbo Pascal and Microsoft C all gave you a choice of which memory model you wanted to use, often you wrote programs where a 64k code and 64k data space is all you need.
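For concreteness, the real-mode trick being described: a 16-bit segment value is shifted left four bits and added to a 16-bit offset, so a segment register works as a 16-byte-aligned pointer into the 1 MB space, and "huge" pointers can be normalized so most of the address lives in the segment part. A sketch in C:

```c
#include <stdint.h>

/* Real-mode 8086 address formation: linear = segment * 16 + offset. */
uint32_t linear_addr(uint16_t seg, uint16_t off)
{
    return ((uint32_t)seg << 4) + off;
}

/* Normalize a seg:off pair so the offset is < 16, as large/huge-model
 * compilers did: push as much of the address as possible into the
 * segment part, so pointers to the same byte compare equal. */
void normalize(uint16_t *seg, uint16_t *off)
{
    uint32_t lin = linear_addr(*seg, *off);
    *seg = (uint16_t)(lin >> 4);
    *off = (uint16_t)(lin & 0xF);
}
```

Note how many seg:off pairs alias the same byte (e.g. 1000:0123 and 1012:0003 both name linear 0x10123), which is exactly why normalization was needed before comparing pointers.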
Turbo Pascal actually didn't: IIRC versions 3.x and lower used the "small" model (they were also available for CP/M on Z80, where 64K was the entire address space anyway), then newer ones exclusively the "large" one.
TP forcing use of large pointers for everything, and its lack of even trivial peephole optimizations, probably contributed to the myth that C was somehow inherently more efficient.
[BT]P7 includes for compatibility: tiny (.COM), small, compact, medium, large, and large w/ overlays (which can use EMS when available). There are DPMI clients for [BT]P7 that make it possible to switch to protected mode and use more memory.
It definitely generated offset-only pointers in the tiny, small, and medium memory models because that's how they referred to data with the common DS segment. I did plenty of memory and instruction-level debugging in [BT] Profiler/Debugger and inspected plenty of pointers where they lived in RAM.
Correction: Protected user-supervisor mode separation, and virtual memory with page faulting are what are essential. NX helps enforce W^X but it's not a deal-breaker functionally even if it's very useful.
Who cares about address space organization? (k|)ASLR, PIC & PIE mean this is a detail not worth your time. This kind of standardization is vital for security, debugging, and profiling.
The 386 has an LDT that generally isn't used, and there's only one TSS that is updated rather than multiple ones. FS and GS tend to be used for thread-local storage.
It did, but it was honestly quite limited in 16-bit mode, not only due to the (intentionally) limited scope of the OS but because the 16-bit addressing only allowed for a 64k code segment and a 64k data segment per process.
UNIX, designed and developed on the PDP-11, had similar per-process instruction/data space limitations until V6 and V7 were ported to 32-bit minis ca. 1977 and 1979, and was further constrained by the PDP-11's limited physical address space: 18-bit 1970–75, 22-bit 1975–, so a quarter of the 8086's and 80286's 20- and 24-bit address spaces, respectively.
As an aside, this reminds me of an amusing early example of a rough-and-ready configure-style script[1] included in the BSD source for the compress(1) utility, used to limit the maximum supported LZW code length based on an estimate of memory that will be available to the process[2].
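The quantity that script was estimating grows exponentially with the code width: an n-bit code space has 2^n entries, and the compressor's tables need several bytes per entry. A rough worked calculation in C (the bytes-per-entry figure is an illustrative assumption, not compress(1)'s actual layout):

```c
#include <stdint.h>

/* Rough table-size estimate for an LZW implementation using
 * n-bit codes: 2^n entries times some per-entry cost. The
 * bytes_per_entry value is an assumption for illustration. */
uint64_t lzw_table_bytes(unsigned code_bits, unsigned bytes_per_entry)
{
    return ((uint64_t)1 << code_bits) * bytes_per_entry;
}
```

At 8 bytes per entry, 12-bit codes cost ~32 KB while 16-bit codes cost ~512 KB, which is why capping the code length mattered on small machines of the era.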
286 protected mode supports paging, but it can't revert to real mode and it only supports 16 MiB of RAM. And 286 PE is really buggy and limited. It's pretty crap.
386 support was dropped, 486 support has not yet been dropped as of writing.
286 doesn't support paging, only segmentation. I have read the manuals extensively, and also discovered what may be the last undocumented feature, almost 40 years after the chip being released. [https://rep-lodsb.mataroa.blog/blog/intel-286-secrets-ice-mo...]
Not saying this to brag, just to establish that I probably know more about that chip than the average poster. And you're also posting plainly wrong claims elsewhere in this thread. I don't get why you're doing this?
> Linux doesn't support it due to requiring an FPU.
I have used Linux on a 386SX which did not have any FPU. Linux had x87 floating point emulation for a long time (AFAIK, that emulation has since been removed, together with the rest of the support for very old CPUs).
The main reasons Linux doesn't support the 80286 are AFAIK that the 80286 is a 16-bit CPU, and Linux (other than the short-lived ELKS fork) has never supported 16-bit CPUs; and that the 80286 doesn't have paging (it has only segmentation), which would require running it as a "no-MMU" CPU.
> I've put plenty of AVRs on ethernet - and once, a 386, using an ancient DOS TCP/IP stack, but never a 286. Linux doesn't support it due to requiring an FPU.
mTCP (http://brutmanlabs.org/mTCP/) is a currently-maintained TCP/IP stack for MS-DOS that will run on any PC, 8088 and up. You also need a packet driver for your NIC. 286 machines are easy; nearly any 16-bit ISA NIC will work. XT-class machines are slightly trickier because not all NICs will work in 8-bit ISA slots and some packet drivers use 186/286 instructions.
> Linux doesn't support it due to requiring an FPU.
This is incorrect.
FPU means "floating point unit". Linux does not need this. Few OSes do at all.
You meant "MMU", meaning memory management unit. This is what Linux uses to perform virtual memory handling, and that is why Linux can't run on an 80286: the '286 had no MMU (or FPU).
The difference being that 99% of computers without an FPU could just have one added. This was never true of an MMU.
Indeed some early Unix machines based on CPUs with no MMU used an entire 2nd CPU just for MMU duties.
Incorrect. Clock speed is not a 1:1 correlation with cycle efficiency. There are plenty of "slow" single-cycle architectures with very good cycle efficiency, and there are many architectures like the older P4 that have very poor cycle efficiency to chase the MHz wars.
Linux runs fine without an FPU, it even has a software emulation.
It’s the hardware task switching features that the 386 introduced that it requires (and the reason why Linus got one in the first place back in the day)
Hardware task switching was introduced on the 80286. And no modern OS - including Linux, except for the earliest versions - uses it, because the way it is implemented, the mechanism will do a lot of unnecessary duplicate work (preserving "garbage" register contents from just before the kernel is about to return to user mode, and will consequently have to restore them itself).
The only situation where it would be required to use this misfeature is for handling exceptions like stack overflow in the case that they can occur in kernel mode. A "Task State Segment" is really more like a pre-allocated stack frame anyway, the CPU puts active ones in a linked list and does not allow freely switching between them, only returning to the outer level.
x86-64 does not have hardware task switching anymore, instead it is now possible to switch to another stack on an exception or interrupt, which is all that the TSS was used for in practice anyway.
Where am I disagreeing with that? Hardware task switching already existed on the 16-bit 80286, and remained supported on newer generations until being finally removed in x86-64. Early versions of Linux did use it for a time, but doing it in software turns out to be faster, because the mechanism was badly designed for what a general-purpose kernel has to do.
It's really about engineering trade-offs and cost. It used to be the cost of extra logic to allow for devices to work out resource allocation would be much higher than the cost of a set of pin-headers and jumpers. Now the cost of those extra pin-headers and jumpers dwarf the cost of the extra logic.
> One major aspect of PCI which I do respect is its incredible backward compatibility. I still use a 20-year-old Adaptec PCI SCSI card via an adapter carrier in my latest PC (drivers were fun but it works).
Motherboards with PCI slots are becoming rarer and rarer though. Most modern boards only seem to have PCI Express.
I remember using early plug-and-play devices. Often I had two devices that both worked, but not if installed in the same machine at the same time, and I would have to shut down the machine and swap cards to use the other device. I am still not sure exactly what the problem was that caused that.
I don't miss it because the resources and choices were extremely limited. PnP (ISA, VLB, AGP, PCI/-X, and to some degree MCA and EISA) was revolutionary because it automated the mundane.
What I remember is clones of the IBM AT taking over because they delivered such unprecedented performance for the price. In the "8-bit" age of Apple and Commodore we looked up to minicomputers like the PDP-11 and DECSYSTEM-20, but AT clones could surpass them, and soon the 386 would surpass the VAX, although it would take most of a decade for 32-bit OSes to become mainstream.
When I was in high school I developed some software for a teacher on my PC, and was a little shocked to find she was still running CP/M. It was no problem at all, because I could emulate a Z-80 on my AT clone at 3x speed!
People forget that the jump from "no computer" to "any computer" is so huge as to be unprecedented.
It was very common to find ancient computers just being used for work and word-processing, etc, up to and into the Internet era.
The Internet was what mainly pushed those aside; once that really took off it became a question of "can you connect" and if you couldn't, an upgrade started looking really good.
I'd be fascinated to have a breakdown of what all the ICs are on the motherboard. Like are they mostly ASICs, or mostly discrete logic, like level shifters, gate drivers, shift registers and the like?
Either way, it's amazing to think that basically every non-power related chip on that board is now inside a modern chipset IC.
Most of it is somewhat more high-level than the 7400-series ICs and such you might find in something older like the early 6502-based microcomputer motherboards.
Not sure about the rest, but I thought it was funny that the 80286 CPU is among the smallest chips on the board - that's probably due to the PLCC (?) packaging, while most of the others are DIP, but still...
The only other PLCC-packaged chips are two ATF1508AS, which are CPLDs (programmable logic rather than true ASICs) - oh, and the 53C400, which is... a SCSI controller (probably added as a bonus?!).
>I thought it was funny that the 80286 CPU is among the smallest chips on the board - that's probably due to the PLCC (?) packaging, while most of the others are DIP
That's basically it. The 286 has a much larger die inside than a typical DIP chip does, and DIPs are extremely inefficient packages.
It would be interesting to see how much you could shrink this board down if all these chips were replaced with BGAs (using the same dies inside). Obviously, that's not really feasible (no one is selling a BGA version of an 8259 interrupt controller, much less an 80286), but it would be interesting to see the dramatic size reduction, just from packaging waste. Also interesting would be if you designed a board using BGA versions of all these chips, but fabbed on modern IC processes (thus yielding much smaller dies, and therefore somewhat smaller BGA packages). Of course, the whole thing could probably just be implemented with a single FPGA these days.
Interesting side point: I know for a fact of at least one nuclear power station where the control systems are all 286s. They spend a fortune getting replacement parts. Sometimes nostalgia for some retro computing has a potentially useful, practical customer use case.
AFAIK they are required to use very specific certified parts. So while they could in theory use motherboard replacements cleanly, I believe that's prohibited because the replacements haven't undergone certification. That said... I think this is very valid, and a company that made extremely high-quality parts for this purpose could serve both markets. But it would mean getting a modern board through FCC and certification, and that is an extremely non-trivial process IIRC. If I were going to go down that route I'd reach out to the utilities to see if they would fund it.
I'm not so keen on putting the RAM on the ISA bus like that; assuming that the ISA bus is standard and thus 8MHz, it's going to slow the system down unnecessarily.
That's not going to bring much joy. There are ISA cards which expect not (much) more than an 8MHz clock (bus speed); e.g. the (at one time popular) Adaptec 1542B SCSI HBA will not work reliably at 10MHz.
5170 (IBM AT) was a 286 design. The 5150 (IBM PC) and 5160 (IBM PC XT) were the previous 8088-based systems. Sounds like it's described as "based on" because it's heavily reverse engineered, modified to use more readily available components where possible, and then improved to 20MHz capability over the original 5170's 6/8MHz.
>Sounds like it's described as "based on" because it's heavily reverse engineered, modified to use more readily available components where possible, and then improved to 20MHz capability over the original 5170's 6/8MHz.
I wonder if he looked at any clone motherboards from that time, or a few years later? 16MHz and 20MHz 286s were quite common before the 386 took over, and they probably had to make some changes too (and came a few years after the PC AT, and so probably had a lot of improvements; the AT came out in 1984, but the clone 286s were still pretty strong in the late 80s).
I was thinking 286s were faster than was being mentioned, but I suppose his design was speeding up a design from the beginning of the era instead of copying a later design.
286s were quite fast for DOS applications in their later days; they were actually faster than the newer 386SX systems that were competing against them, but of course they couldn't do 32-bit operations.
I am the designer of this project, and I would like to thank magicalhippo, who originally posted here about my work. I appreciate that my work can reach more people; hopefully some might enjoy seeing and possibly even building it.
This reply is for people here who are interested in the design or feel a certain nostalgia. I noticed there were some questions which I would like to comment on, to make a few things more clear for interested users here.
I started out doing Z80 stuff in the 90s, and did a Z80 PC mainboard. That got me interested in also designing an XT system, to learn from it and possibly benefit a later revision of the Z80 system, because that was my favorite CPU. I worked on that until I found the XT design sufficiently user-friendly and complete. The original XT schematics were relatively complete. I have seen some bare PCBs of my XT design being sold on a Russian website, which I was happy to see, because I hope that anyone enthusiastic and feeling nostalgic about the XT machine, and able to do so, could have the chance to also build one. Usually you get 5 pieces from, for example, JLCPCB, so there will be some excess boards left over. None of the mainboards are easy; they require some skill level and understanding in order to be able to debug any timing-related issues. It could be interesting for students as a school project.
Next I decided to look at the AT, which is a completely different machine, and yet shares many similarities. The big appeal of the AT to me was the fact that the IBM team involved had the great foresight to make the 16-bit AT completely hardware- and software-compatible with the 8-bit XT machine. When you study the complete design, you find that this is not a trivial matter and goes deep into the core of the technology in order to achieve true backward compatibility. In doing this, the PC/AT paved the way for the industry standard to successfully evolve from it. The 16-bit 286 is limited, but it deserves respect and served its role in giving humanity PC technology. The clone builders then took off with the standard, which has kept the original 5170 functionality at its core. It's pretty amazing for this machine to have survived in so many iterations of PC technology! To me this is a legendary level of invention by IBM, thanks particularly to Don Estridge.
The 5170 is a severely limited machine because it consists of a lot of TTL circuits which are vulnerable to timing problems and difficult to scale up in that configuration. The 5170 and similar TTL-chipset boards are error-prone in my experience; I tested 4 of these old boards and they all had some issues. In order to get the functionality I needed, I had to say goodbye to the TTL graveyard approach and venture into programmable logic. Initially I didn't want to do this because it's not open, but as I recreated more of the circuits hidden in the PAL chip U87, the scale of the whole design started to grow beyond what I was comfortable integrating into a single mainboard while still including some useful integrated I/O. The design files are shared, so technically it's still openly known technology, just in a programmable form. The CPLD chips are a little large; however, they have the advantage that through-hole sockets are possible, which makes the design more accessible for people who don't like SMD.

There is a LAN chip on the board, but it should be left out; my tests were not great, and it seems to cause stability issues in the system, so I took the chip off again. My design does support the 80287 math coprocessor; I have tested this. Please keep in mind, this was literally my first prototype PCB concept of an AT design, and it took me more than a year to develop.

Before my work, the AT design was only partially known openly to the world. You can see some MAME files and some previous failed attempts to reverse engineer the U87 PAL. A big and important part was hidden inside PAL chip U87, which is now finally openly known as a functional schematic. Without knowing the logic inside U87, there is no chance to fully know the system and how it works. After the 5170 it's mostly all chipset-based AT PCs, and those chipsets are not known.
After I had replicated a functional 5170, the next goal was clock speed above the original 8MHz. I progressed to 16MHz and then targeted replacing the 82284 and 82288 control chips inside the existing system-controller CPLD. This finally required some rewiring to patch in an oscillator IC for the new clock signal. A lot of the development work I have done so far is the same as what chipset developers did before me, so I could learn and understand the issues they faced in getting to higher clock speeds and reducing the design in size. Simply replicating the logic in a CPLD comes with a lot of timing-related challenges and issues which I have had to overcome in my development.
Please note, the original 5170 used an S-BUS, or system bus. Directly behind this bus on the mainboard is the M-BUS, or memory bus. So in principle it's possible to move the M-BUS onto a card in the ISA slot, which is what I have done. Also note, the ISA bus is simply a CPU/system bus and has no real clock operation of its own. It's only limited, looking at the system as a whole, by how fast the entire system is able to operate. And that limit is not 8MHz; it's much higher, at least 18 to 20MHz, depending on the configuration. My design was originally intended to be a recreation of the 5170; however, it turned out to have much bigger potential far beyond that design, thanks to the CPLD chips. My design is not DRAM-based; it's SRAM-based. So there is no CAS/RAS involved and no refresh. I removed all refresh functions, which are not needed, and to some extent this frees up the CPU for a few more useful cycles per second.
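The refresh saving mentioned here can be ballparked. On AT-class machines a DRAM refresh request fires roughly every 15.1 µs and steals a handful of bus clocks; as a worked example in C (the cycle counts and timings are approximate assumptions for illustration, not measurements of this board):

```c
/* Ballpark DRAM refresh overhead on an AT-class machine.
 * Assumptions (approximate, for illustration only): one refresh
 * request every 15.1 us, each stealing about 5 bus clocks on an
 * 8 MHz bus (125 ns per clock). */
double refresh_overhead_percent(double period_us, double clocks_stolen,
                                double bus_mhz)
{
    double stolen_us = clocks_stolen / bus_mhz; /* time lost per refresh */
    return 100.0 * stolen_us / period_us;
}
```

With those assumed numbers the overhead comes out to roughly 4% of bus time, which is the order of magnitude an SRAM design with no refresh gets back.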
The next step I am working on now is a combined 286/486 system which can swap the CPU. The 486 function will be 32-bit at some level, so it is planned to support all the faster DX2 and DX4 CPUs which evolved. I will use BGA FPGA chips, which have much higher integration capability and higher clock ratings. This design is in the same spirit as the first-revision prototype: to redo the development which was done in the 90s, but using more modern technology which was not available at the time. It's about the core technology being openly known and finally published, and not being lost in time as original machines die off. Doing this design can preserve the historic technology in a reproducible form; it's fun to me and also allows exploring the further limits and efficiency of these CPUs. Hopefully the FPGAs used will be able to do this at previously unseen levels. I will be using some modern RAM like DDR, and let the FPGA control it and interface it to the legacy CPU. That's the idea. Imagine running a 486 with the full memory at the same speed as cache chips. I don't know if that's possible, but I will attempt it. For more details, check out the VCF thread. The new 286/486 development follows in the same thread which started with my 286 PC/AT design. On GitHub there are separate projects published for each of my design iterations.
Thanks for the additional history and context, and nice work. Clearly a non-trivial undertaking.
I stumbled over it while searching for some details on the turbo button my 286 had for a subthread in this[1] story.
When the 286 finally died we upgraded to a 486, but it was one of those Cyrix clones that could be run on a 386 motherboard. It worked very well overall, except there were some features missing which caused issues with a couple of games, as I recall. Will be interesting to see if you can get the 486 working on this board as well.
Thanks for pointing me to this interesting other thread as well. I replied to this just now offering my theoretical two cents to the conversation.
Regarding the 486 support in my next iteration, it will be a huge amount of work to get this going. The 486 has no 82284/82288-type equivalent ICs to get the system going initially, so I will need to depend completely on reading the Intel timing diagrams in the datasheet/book and develop the logic to create my own CPU state machine and system control mechanisms. With the 286 I did this same work in the end, but with the 486 I will need to develop this right at the beginning just to be able to execute code and have a functional system. I will probably modify an existing 486 mainboard in order to test my own system control circuits on the CPU, replacing the existing ones from the chipset one by one. I will work out a method as I go along. Basically it involves first creating a predictive CPU state machine model, then verifying this against the actual CPU in operation and comparing everything with the datasheet diagrams. In my VCF thread anyone can read how I am going about this project, and how I did it with the 286.
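To make the "predictive CPU state machine model" idea concrete, here is a minimal, illustrative Python sketch of the 486's non-burst bus cycle states (Ti, T1, T2, with ADS# and RDY# as in the Intel i486 datasheet). This is not the author's actual CPLD/FPGA logic, just a toy model of the kind of state machine you would verify against the real CPU's timing diagrams:

```python
# Toy model of the i486 non-burst bus cycle state machine.
# States: Ti (idle), T1 (address phase), T2 (data phase / wait states).
# ads_n and rdy_n are active-low (False = asserted), as on the real pins.

class BusStateMachine:
    def __init__(self):
        self.state = "Ti"  # idle: no bus cycle in progress

    def clock(self, ads_n, rdy_n):
        """Advance one CLK edge and return the new state."""
        if self.state == "Ti":
            # CPU asserts ADS# to start a cycle; address becomes valid in T1
            self.state = "T1" if not ads_n else "Ti"
        elif self.state == "T1":
            # Data phase always follows; external logic must answer with RDY#
            self.state = "T2"
        elif self.state == "T2":
            if not rdy_n:
                # Cycle complete; back-to-back cycles re-enter T1 via ADS#
                self.state = "T1" if not ads_n else "Ti"
            # else: wait state, remain in T2 until RDY# is asserted
        return self.state

fsm = BusStateMachine()
trace = [fsm.clock(ads_n, rdy_n) for ads_n, rdy_n in
         [(False, True),   # ADS# asserted -> enter T1
          (True, True),    # -> T2
          (True, True),    # RDY# not asserted: one wait state
          (True, False)]]  # RDY# asserted -> back to idle
print(trace)  # ['T1', 'T2', 'T2', 'Ti']
```

A model like this can be clocked with captured logic-analyzer traces and its predicted states compared against the datasheet diagrams, which matches the verification approach described above.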
I am preparing the work leading up to integrating the 486 CPU into the system, so I am not reading into this specific documentation yet. That will come when I have all the PCB designs ready and everything built up for testing, because the preparation itself is already a lot of work. I have been warned that designing FPGA logic is even harder than using CPLDs for the kind of "asynchronous" functions found in a PC/AT system, which are easily skewed by seemingly insignificant circuit changes in any area inside the FPGA. The compiler may completely "overhaul" the generated configuration for the FPGA at any time, which may pose a new set of problems to fix, and that can occur at any moment during the project. Anyway, it was kind of newold86 at VCF to give me a heads-up about this. So the trick will be to find ways of changing the core AT design to become more "immune" to compiler changes.
Another challenge in using different types of 486 will of course be the CPU voltage, which differs between types and brands of the 486. So I will need to design an interface between the FPGA and the CPU with a variable logic voltage on the CPU side of the level shifters. Then there is the interface between the FPGA and the 5V ISA slots and cards, and the 32-bit connection to a VGA adapter may pose other challenges as well. I am still doing the research and preparing the various modules incrementally, so I will know more about the actual complexity later. As far as the RAM is concerned, I will also need to look at the voltages involved and which type of RAM is most compatible with the FPGA of choice. A lot of work ahead.
I got some support from Luca (Retro*Tech) from Italy, who offered to help me and donated a ST 486DX4V100, which will be one test subject for the project.
I will start testing the system using a custom 286 module just to verify lots of things like the ISA slots, onboard I/O and the memory interface. There will also be the chance to speed-test the 286 to its limits. I talked with user sqpat, who is also doing very cool work. He overclocked some 286 CPUs to above 30MHz using a TOPCAT-chipset mainboard. Not only that, he created a custom cooling solution to keep the CPU from burning out, and he is developing his own port of DOOM for the 286 called RealDOOM. He is using real mode of the 286 because the TOPCAT chipset is able to provide a paging system within the 640KB base memory area, so he can keep software in the lowest section running while flipping pages to load in DOOM game data. A unique and cool approach, and quite a challenge to get this port fully optimized; I would imagine it involves a lot of code rewriting and a completely new game loading configuration and mechanisms.
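The page-flipping idea described above can be sketched in a few lines: a fixed window of conventional memory is remapped to different physical banks while resident code stays outside the window. The bank size, window address and register interface below are invented for illustration and do not reflect the actual TOPCAT programming model:

```python
# Toy model of chipset-assisted page flipping inside the 640KB base area.
# A hypothetical 64KB window at linear address 0x60000 (segment 0x6000)
# is backed by one of several physical banks, EMS-style.

WINDOW_BASE = 0x60000  # hypothetical window location, for illustration only
BANK_SIZE = 0x10000    # 64KB

class PagedMemory:
    def __init__(self, n_banks):
        self.banks = [bytearray(BANK_SIZE) for _ in range(n_banks)]
        self.mapped = 0  # bank currently visible in the window

    def select_bank(self, n):
        # On real hardware this would be a write to a chipset register
        self.mapped = n

    def read(self, addr):
        if WINDOW_BASE <= addr < WINDOW_BASE + BANK_SIZE:
            return self.banks[self.mapped][addr - WINDOW_BASE]
        raise ValueError("address outside the paged window")

mem = PagedMemory(n_banks=8)
mem.banks[3][0] = 0xAA        # game data preloaded into bank 3
mem.select_bank(3)            # flip the window to bank 3
print(hex(mem.read(0x60000))) # 0xaa
```

The point is that the CPU keeps addressing the same real-mode window while the chipset swaps which physical pages sit behind it, which is how far more than 640KB of data can be streamed through the base memory area.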
Anyway, thanks for the interest magicalhippo, it's much appreciated.
>There is a LAN chip on the board but it should be left out, my tests were not great, it seems to cause stability issues in the system
- reset (and COM1_CS/RTC_DS) is routed horizontally across the board on an inner layer, cutting the ground plane in half right under the Realtek and both CPLDs :o. Weakly stitching top-layer islands with sparse, tiny, high-impedance vias is not a good way of joining isolated grounds.
- only one Realtek VDD pin (17) is connected to the power plane with a thick via; the rest use tiny signal vias.
- one THT 100nF capacitor might not be enough to decouple the Realtek; usually on ISA cards there are pairs of caps for each of the RTL8019's six VDD inputs. Personally I think 12 caps is total overkill, but there is a middle ground in between.
- the Realtek differential TX output is routed along the whole ISA J7 slot between pins, seemingly not a problem, except it also goes between the crystal pins (with interrupted ground, a big no-no) and over SD8-15 with no ground plane underneath. Once again the ground plane is disrupted by signals routed through it. This might lead to, for example, the 20MHz clock coupling into the upper part of the data bus.
- looks like the 1.8432MHz clock for the UART also goes across the whole board, with no ground plane in places.
>In GitHub there are separate projects published for each of my design iterations.
Would it be possible to upload the whole KiCad project (PCB, schematics)? Much easier to browse than gerbers re-imported into pcbnew.
- What's the deal with RTC_CS coming from POWER_GOOD? Why the additional, redundant hex-inverter oscillator for 32kHz?
- Power and reset generation seems very elaborate: 6 chips doing the work of two 7404 inverters and a few RC pairs.
Am I correct in assuming those are leftovers from experimentation? By the way, calling VBAT "VDD" was a funny trap; it took me a good second to realize what was going on :)
I do not understand why there's an ISA memory card.
The 80286 can only address 16MB of RAM. Why not just fill the memory map as standard, then there's no need for SIMM slots or any other memory expansion?
Give me a few Upper Memory Blocks for DOS, and provision for 32MB of EMS as well, and frankly for a DOS machine there is nothing left to add in terms of memory.
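The "fill the memory map as standard" suggestion above can be sketched concretely. The 286's 24-bit address bus covers 16MB, and the standard PC/AT ranges look like this (the exact placement of free UMBs depends on installed adapters; the EMS pages mentioned above are bank-switched through a window, so they sit outside this physical map):

```python
# Standard PC/AT physical memory map for a fully populated 80286 system.
MB = 1 << 20

memory_map = {
    "conventional":   (0x000000, 0x09FFFF),  # 640KB base memory
    "upper memory":   (0x0A0000, 0x0FFFFF),  # video, ROMs, free UMBs
    "extended (XMS)": (0x100000, 0xFFFFFF),  # 15MB above the 1MB line
}

for name, (lo, hi) in memory_map.items():
    size_kb = (hi - lo + 1) // 1024
    print(f"{name:14s} {lo:06X}-{hi:06X} {size_kb:5d} KB")
```

That is 640KB + 384KB + 15MB = the full 16MB the CPU can physically address; anything beyond that (like 32MB of EMS) has to be paged through a window rather than mapped directly.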
How much is 48 _megabytes_ of RAM in 2024? 5¢? 10¢?
Only because you're including RAM spec'ed at 75ps access times.
If you limit the selection to 133MHz (the slowest "max speed" listed, which is ridiculous; you can still easily purchase NEW 50ns and 70ns SRAM elsewhere, which is why I don't buy from Mouser)...
Then the price falls to £5 for a 32MB (256Mbit) chip, which isn't excessively priced, but it's still more than I paid for similar chips a few weeks ago.
The 286 doesn't want anything; the CPU is RAM agnostic. The project author didn't want to mess with implementing a DRAM controller, not that it's difficult. Here's a simple 16-bit controller realized with ten 74xx chips: https://amigaprj.blogspot.com/2013/05/amiga-fast-ram-expande...
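The core job of such a controller is multiplexing the CPU address into row and column halves strobed by RAS#/CAS#, plus periodic refresh. Here is a tiny sketch of just the address-mux part, for a hypothetical DRAM with 10 row and 10 column bits (the linked 74xx design's exact geometry may differ):

```python
# Address multiplexing as a DRAM controller performs it: the row half is
# driven while RAS# falls, then the column half while CAS# falls.

def mux_address(addr, bits=10):
    """Split a linear address into (row, column) for a DRAM with
    `bits` row/column address lines. Hypothetical 10/10 geometry."""
    mask = (1 << bits) - 1
    row = (addr >> bits) & mask  # upper half, latched on RAS# falling edge
    col = addr & mask            # lower half, latched on CAS# falling edge
    return row, col

print(mux_address(0x5F3A1))  # (380, 929)
```

In discrete logic this is typically a pair of 74x157-style 2:1 multiplexers switching the DRAM address pins between the two halves, which is a large part of why the chip count stays so low.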
Oh right, I didn’t read far enough to see there’s no DRAM support - very unusual for a 286 system (at least a PC-compatible one). That’s a big tradeoff, indeed. Thanks!
I wouldn't call it a tradeoff. The author didn't want to design a memory controller, and SRAM gets him effortless 0-wait-state access.
Personally I'm also a fan of keeping RAM old school. I even reverse engineered a RAM card for a 386 board just last week: https://github.com/raszpl/386RC-16
Neat. I happen to have a collection of 25 MHz "brain-dead" 286 processors and 287XL and 287XLT coprocessors. I'm thinking about creating a board that puts all of the glue logic, bus transceivers, support peripherals, and BIOS into an FPGA so that board would be extremely minimal. While inauthentic compared to the original 5170 AT, it would be far more flexible.