8088 Domination Post-Mortem, Part 1 (oldskool.org)
252 points by userbinator on June 20, 2014 | hide | past | favorite | 106 comments



That's absolutely insane at that clock rate. The way I used to get 'animations' (for want of a better term) done was by rendering them frame by frame, compressing that, and then playing it back at high speed. And even that was next to impossible. Decompressing video @60fps, doing real-time dithering to increase the effective number of colours, and still having time enough for 45KHz audio is totally nuts. This qualifies as art, not just software.


For me, the most interesting part is that his solution - updating only the changed parts between each frame and the previous one, and approximating the changes so that they're not (too) visually perceptible in order to satisfy a bitrate constraint - is one of the ways that modern video codecs achieve their compression.
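
Roughly, that core idea fits in a page of C. This is just a toy sketch with a made-up (offset, length, bytes) run format and an arbitrary per-frame byte budget, not the article's actual encoder:

    /* Toy frame-delta encoder: emit (offset, length, bytes) runs for the
       regions that differ between the previous and current frame, stopping
       when the per-frame byte budget is exhausted.  The format and budget
       are made up for illustration. */
    #include <stdio.h>
    #include <string.h>

    #define FRAME_SIZE 16384   /* e.g. one CGA graphics page */

    size_t delta_encode(const unsigned char *prev, const unsigned char *cur,
                        unsigned char *out, size_t budget)
    {
        size_t used = 0;
        for (size_t i = 0; i < FRAME_SIZE; ) {
            if (prev[i] == cur[i]) { i++; continue; }
            size_t start = i;
            while (i < FRAME_SIZE && prev[i] != cur[i] && i - start < 255)
                i++;
            size_t len = i - start;
            if (used + 3 + len > budget)      /* out of bandwidth for this frame */
                break;
            out[used++] = (unsigned char)(start >> 8);   /* offset, 2 bytes */
            out[used++] = (unsigned char)(start & 0xFF);
            out[used++] = (unsigned char)len;            /* run length */
            memcpy(out + used, cur + start, len);        /* new pixel bytes */
            used += len;
        }
        return used;
    }

    int main(void)
    {
        static unsigned char prev[FRAME_SIZE], cur[FRAME_SIZE], out[2 * FRAME_SIZE];
        memcpy(cur + 100, "changed", 7);     /* pretend a small region changed */
        printf("delta payload: %zu bytes\n", delta_encode(prev, cur, out, 2048));
        return 0;
    }

A real encoder would approximate the runs it can't afford (and catch up on later frames) rather than just dropping them, which is where the "not (too) visually perceptible" part comes in.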

I agree it's also amazing that apparently, the true limitations of hardware from over 30 years ago are still rather elusive... this is the complete opposite of the "throw more hardware at it" attitude towards most software problems today, but instead it's "throw more brainpower at it".


The more I progress in our domain of expertise, the more I observe we're being incredibly wasteful† all over the place. For all the expressive power of our platforms and languages it somehow sounds insane that time (ruby -e '100_000_000.times {}') takes four solid seconds on my 3.4GHz machine††. I know, bogoMIPS are no benchmark, this is just to exemplify that layers of abstraction, while useful (necessary even), are also harmful, the underlying question being: how many layers are too many layers?

I dream of a system redesigned from the ground up, where hardware and software components, while conceptually isolated, cooperate instead of segregating each other into layers. See how ZFS made previously segregated layers cooperate to offer a robust system, see how TRIM operates on the lowest hardware levels by notifying of filesystem events, see how OSI levels get pierced through for QoS and reliability concerns. Notice how the increase in layers and thus holistic complexity rampantly leads to more bugs, more vulnerabilities, more energy wasted. We all know the fastest code is the one that does not execute, the most robust code is the one that doesn't get written, the most secure code is the one that doesn't exist. Why do I still see redraws and paintings and flashes in 2014? Why does a determined adversary have such a statistical advantage that he is almost guaranteed to get a foothold into my system? This is completely unacceptable. For as much as we love playing with it, the whole web stack, while a significant civilization milestone, is, as a whole, a massive technological failure (the native stack barely fares better).

† I consider wasteful and bloated subtly distinct

†† not at all an attack on Ruby, just what I happen to have at hand right now


I think the underlying cause of this overabstraction is largely a result of abstraction being excessively glorified (mostly) by academics and formal CS curricula. In some ways, it's similar to the OOP overuse that has thankfully decreased somewhat recently but was extremely prevalent throughout the 90s. In software engineering, we're constantly subjected to messages like: Abstraction is good. Abstraction is powerful. Abstraction is the way to solve problems. More abstraction is better. Even in the famously acclaimed SICP lecture series [1] there is this quote:

"So in that sense computer science is like an abstract form of engineering. It's the kind of engineering where you ignore the constraints that are imposed by reality."

There is an implication that we should be building more complex software just because we can, since that is somehow "better". Efficiency is only thought of in strictly algorithmic terms, constants are ignored, and we're almost taught that thinking about efficiency should be discouraged unless absolutely necessary because it's "premature optimisation". The (rapidly coming to an end) exponential growth of hardware power made this attitude acceptable, and lower-level knowledge of hardware (or just simple things like binary/bit fields) is undervalued "because we have these layers of abstraction" - often leading to adding another layer on top just to reinvent things that could be easily accomplished at a lower level.

The fact that many of those in the demoscene who produce amazing results have never formally studied computer science leads me to believe that there's a certain amount of indoctrination happening, and I think to reverse this there will need to be some very massive changes within CS education. Demoscene is all about creative, pragmatic ways to solve problems by making the most of available resources, and that often leads to very simple and elegant solutions, which is something that should definitely be encouraged more in mainstream software engineering. Instead, the latter seems more interested in building large, absurdly complex, baroque architectures to solve simple problems. The "every byte and clock cycle counts" attitude might not be ideal for all problems either, but not thinking at all about the amount of resources really needed to do something is worse.

> how many layers are too many layers?

Any more than is strictly necessary to perform the given task.

[1] http://www.youtube.com/watch?v=2Op3QLzMgSY#t=10m28s


"Demoscene is all about creative, pragmatic ways to solve problems by making the most of available resources"

It probably doesn't hurt that nobody expects a demo scene app to adapt to radical changes in requirements, or to interoperate with other things that are changing as well - for that matter, to even conform to any specific requirements other than "being epic".

For instance, the linked 8088 demo encodes video in a format that's tightly coupled to both available CPU cycles and available memory bandwidth. Its goal is "display something at 24fps".

Not that I'm a fan of abstraction-for-its-own-sake, but putting scare-quotes around real problems like premature optimization is an excessive counter-reaction.


The period up to the ~'60s gave us a vast theoretical foundation, and from then on we toyed with it, endlessly rediscovering it (worst case) or slightly prodding forward (best case), trying to turn this body of knowledge into something useful while accreting it into platforms of code, copper and silicon. My hope is that the next step will eventually be for some of us to stop our prototyping, think about what matters, and build stuff this time, not as a hyperactive yet legacy-addicted child, but as a grown-up, forward-thinking body that understands it's not just about a funny toy or a monolithic throwaway tool that will end up lasting decades, but a field that has a purpose and a responsibility.

To correct the quote:

Computer science is not an abstract form of engineering. Software (and hardware in the case it's made to run software) engineering is leveraging CS in the context of constraints imposed by reality.

> Any more than is strictly necessary to perform the given task.

Easy to say, but hard to define up front when 'task' is an OS + applications + browser + the hardware that supports it ;-)

This[0] is the typical scenario I'm hoping we would build a habit of doing.

[0]: http://www.folklore.org/StoryView.py?story=Negative_2000_Lin...


> abstraction being excessively glorified (mostly) by academics and formal CS curricula.

It's not just academics, it's many developers, too.

We're in an old-school thread. We like what's really going on. Hang out in the Web Starter Kit thread from last night, though, and you'll find tons of people who glorify abstraction.

The reality is that competing forces spread out the batter in different directions: the abstractionists write Java-like stuff. The old-schoolers exploit subtle non-linearities.

Actual commercial shipments rely on a complex "sandwich" of these opposed practices.

> Demoscene is all about creative, pragmatic ways to solve problems

Yes and I grew up with the demoscene (c64 and amiga 500) and it's also about magic, misdirection, being isolated for long winters and celebrating a peculiar set of values. Focus is shifted toward things that technologists know are possible, such as tight loops running a single algorithm that connects audio or video with pre-rendered data, not on what people want or need, such as CAD software or running mailing lists. Flexibility, integration and portability are eschewed in favor of performance.

Don't get me wrong, I LOVE the demoscene - it's the path that got me to love music. And I have near-total apathy for functional programming. I only code in Javascript when weapons are pointed at my heart, but with the proper balance, there are some very real reasons to make use of abstraction. It's not just academics, it's people solving real problems. The trick is to act strategically with respect to the question: which parts will you optimize and which parts will you offload to inefficient frameworks?


> I think to reverse this there will need to be some very massive changes within CS education.

For instance, starting it in elementary school. A surprisingly large amount of the mathematical portion of CS has very little in the way of prerequisites.


Having been in the demoscene (Imphobia) for a long time and having also done more abstract stuff (quad-tree construction optimizations), I can say that writing a demo is not the same as computing theory. Writing a demo is most often exploiting a very narrow area of a given technology to produce a seductive effect (more often than not, to fake something thought impossible so that it looks possible). So you're basically constraining the problem to fit your solution.

On the other hand, designing pure algorithms is about figuring out a solution for a given, canonical and often unforgiving problem (quicksort, graph colouring?). To me, this is much harder. It involves much the same amount of creativity but somehow, it's harder on your brain: no, you can't cheat; no, you can't linearize n² that easily :-)

To take an example: you can make "convincing" 3D on a C64 in a demo because you can cheat, precalculate, optimize in various ways for a given 3D scene. Now, if you want to do the same level of 3D but for a video game where the user can look at your scene from unplanned points of view, then you need to have more flexible algorithms such as BSP trees. So you end up working at the algorithm/abstract level...

A very good middle ground here was Quake's 3D engine. They used the BSP engine and optimized it with regular techniques (and there they used the very smart idea of potentially visible sets), but they also used techniques found in demos (M. Abrash's work on optimizing texture mapping is a nice "cheat" -- and super clever).

Now don't get me wrong, academia is not more impressive than the demoscene (but certainly a bit more "useful" for society as a whole). These are just two different problems, and there are bright minds that make super impressive stuff in both of them...

stF


> I think to reverse this there will need to be some very massive changes within CS education.

Well, I mean, that is most definitely true regardless. But, with my experience getting my BS in CS a few years ago, it had nothing to do with "mainstream software engineering" either. I had classes on formal logic and automata, algorithms (using CLRS), programming language principles (where we compared the paradigms in Java, Lisp, Prolog, and others), microprocessor design (ASM, Verilog, VHDL), compilers, linear algebra, and so on. Very little in the way of architecting and implementing large, abstracted, real-world business applications or anything remotely web-related. In my experience I did not meet anyone interested in glorifying heaps of whiz-bang abstraction; they seemed to be more in line with the stereotypical "stubbornly resisting all change and new development" camp of academics.


I sense the frustration around this subject is building. What I'm afraid of is that once it boils over into action it will lead to a repetition of moves. That's the hard part: getting a 'fresh start' going is ridiculously easy, and that's one of the reasons we have this mess in the first place.

Very hard to avoid the 'now you have two problems' trap.


Indeed. The problem with starting over is that anything you start over with is going to be simpler, at first. Thus potentially faster, easier, etc, etc.

Rewrites are hard and costly, which is rarely taken into account. Even just maintaining a competent fork is hard enough.

I think it's probably worth the effort, but I'm not quite sure how you get from A to B without just having some super competent eccentric multi-billionaire finance a series of massive development projects.


> I think it's probably worth the effort, but I'm not quite sure how you get from A to B without just having some super competent eccentric multi-billionaire finance a series of massive development projects.

And Elon Musk is busy doing rockets and electric cars!


I think it didn't happen because the people who feel this way are precisely the ones in a position to understand how vast and hard an undertaking it is, not only to attempt, but to actually succeed at.

Few have attempted a reboot, yet the zeitgeist is definitely there: ZFS, Wayland, Metal, A7, even TempleOS (or whatever its name is these days). Folks are starting to say to themselves 'hey, we built things, we learned a ton, we do feel the result, while useful, is a mess, but we now genuinely understand we need to start afresh and how'. It's as if everyone were using LISP on x86 and suddenly realised they might as well use LISP machines.

I too fear we just loop over, yet my hope is that in doing that looping, our field iteratively improves.


I'd answer in two ways: One, it is already happening. The 10M problem (10 million concurrent open network connections) is solved by getting the Linux kernel out of the way and managing your own network stack: http://highscalability.com/blog/2013/5/13/the-secret-to-10-m... - The beauty of their approach is that they still provide a running Linux on the side to manage the non-network hardware so you have a stable base to build and debug upon.

Two, I am not sure we are that much smarter now than we were then. As you have quoted a language problem I'll use one myself as an example. See this SO question: https://stackoverflow.com/questions/24015710/for-loop-over-t... . I wanted to have a "simple" loop over some code instantiating several templates. I say simple, because I had first written the same code in Python and found out it was too slow for my purposes and thus rewrote it in C++. In Python this loop is dead simple to implement, just use a standard for loop over a list of factory functions. In C++ I pay for the high efficiency by turning this same problem into an advanced case of template metaprogramming that in the end didn't even work out for me because one of the arguments was actually a "template template". And on the other hand, making the C++ metaprogramming environment more powerful has its own set of problems: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n361...


I'm finding that an inherent psychological part of software development is to accept that nothing will be perfect. Everything is fucked up at some level, and there's no practical way around it. You just bite the bullet.

You stop worrying and learn to love the bomb.


My machine is slower than yours and LuaJIT does your 100-million-iteration benchmark in 0.037s:

    time luajit -e 'for i=1,100000000 do end'
    
    real	0m0.037s
    user	0m0.034s
    sys         0m0.002s
Just plain old Lua

    time lua -e 'for i=1,100000000 do end'
    
    real	0m0.502s
    user	0m0.497s
    sys         0m0.004s
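
And for comparison, a C analogue of the same empty loop (just a sketch, timings obviously vary by machine). The fun part is that a typical optimizing compiler deletes the loop outright, which rather literally demonstrates the "fastest code is the one that does not execute" line upthread:

    /* empty_loop.c -- C analogue of the ruby/lua one-liners above.
       Build and time it with e.g.: cc -O2 empty_loop.c && time ./a.out
       With optimization on, the compiler removes the empty loop entirely. */
    int main(void)
    {
        for (long i = 0; i < 100000000L; i++)
            ;   /* do nothing, 100 million times */
        return 0;
    }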


> "throw more brainpower at it"

Back then there was simply no other way. I remember doing a 3D real-time fly-by of a big architectural development in Amsterdam ("Meervaart") in the 80's. I custom built the machine, pulled a trick where I clocked the FP coprocessor faster than the main processor, had a Tseng graphics card (just about as fast as it would go at the time). And all the rest was software: hidden line removal, 800x600 on some primitive beamer at 25 fps. It was the best I could do at the time and it took many weeks to prepare for that demo. Just digitizing the whole neighbourhood was a monk's job, I still have the aerial photograph as a souvenir from the job.

I got paid with a rusty old car that I wanted the engine from :)


Wow, as someone who saw some "cutting edge" 3D as a young student in the early 90's, this is beautiful. Weren't the Tseng cards in the 80's pretty much the first consumer cards with features hinting at fmv / 3d ? I was a tad young to know the details, I know their cards in the early 90's were incredible, but I wasn't there for the first Tseng labs stuff. Friends of mine claim that the early Tseng stuff was so impressive they suspected fakery in some of the demos!

Your clocking antics remind me of when I had to match a motherboard / processor to the maximum serial data rate acceptable by an old milling machine. The controlling software was no longer supported, and relied on the clock speed for timing (disastrous for controlling motors / servos etc) so I trialled a bunch of processor / MB combos until the milling machine accepted the output... Involved underclocking a Cyrix Cx something on some unknown brand MB that supported non-standard clock multipliers.

I got paid with a set of 5 year old race skis :-)


I loved the Tseng mostly because of its nice memory map and the fact that the registers weren't very secret. Before that it was "VGA Wonder" (ATI).

The Tseng VESA cards did not do 3D but they were blisteringly fast (for the time) if you knew how to hit them 'just so'. Do everything by the row and avoid bank switches at all cost.

The funny thing is that the driver I wrote for the card was only about 2% or so Tseng specific. gp_wdot, gp_rdot, gp_wrow and gp_rrow were the only routines out of about 150 or so that were optimized, and they were quite short to begin with. And that alone was enough to get very close to maximum bandwidth between the CPU and the graphics memory (this was across the VLB).

I like your clocking trick a lot better than mine, I just soldered an extra socket for an oscillator to the motherboard and ran one wire under the chip to the right pin (and I cut one trace on the motherboard). Plugging in a bunch of oscillators until the FP chip started to behave weird (and then adding a little fan and pushing it some more :) ).

Interesting how those payments worked out.

Now I'm seriously wondering if there is a way in which I could resurrect that demo. No idea what I did with the data, I probably still have the code in some form or a descendant of it.

This was the card I originally wrote the code for:

http://www.vgamuseum.info/index.php/component/content/articl...

But by then I may have upgraded to a et4000 (the 3000 was 16 bit ISA).


Your clocking trick is exactly what I spent 3 weeks swapping CPU / MB combos trying to avoid!

Kudos for actually doing it, and making it work!


Oh trust me if I had had the money I would have happily pursued your route. Cutting a trace on your only working computer and soldering bits & pieces onto the motherboard in order to land a job (talk about risk/reward here, I'm not sure how I would have worked without that machine but I really wanted that engine ;) ) made me pretty nervous. If I could have saved myself that batch of cold sweat I would have happily done so.

What got me is that it did work, I fully expected there to be some level of synchronization between the chips that would require both of them to be clocked at the same rate. The only reason I tried this is that the main CPU appeared to stop working and I figured it was worth a shot to see if the FP could go faster. And it did, and not just a little bit faster! Apparently Intel engineers were quite friendly when they designed the interaction between the two processors because in spite of the huge discrepancy in clock speed between the two chips it worked incredibly well.


> the true limitations of hardware from over 30 years ago are still rather elusive

That was the basic idea that kept the Apple II line alive for ~15 years on an 8-bit processor running at 1 MHz. Of course at the end, there were a handful of faster configurations, but the IIgs @ 2.5 MHz and the short-lived IIc+ at 4 MHz were the only machines Apple produced with faster processors.


Why the Apple II was still kept around for that long is kind of a mystery to me. It's not games. Maybe educational customers? Maybe next to no migration path for business users? I had an uncle who ran a veterinarian clinic off of AppleWorks and several floppies worth of data for god knows how long. "Works for me" is a powerful force, and they'd probably squeezed all the costs out of the Apple II line.


"Apple II was still kept around for that long is kind of a mystery to me."

There was a very strong following, especially in the educational market. I remember seeing schools purchasing labs of IIGS's as late as the early 1990's.

Basically, the apple ][ was the cash cow that kept Apple afloat for years while they tried to sell 68k macs. Apple basically tried to kill the II for a decade but wasn't successful enough to just cut off the customer base that was crying for new models.


There are many reasons, but one of the big subtleties that should get remembered is that the Apple II had essentially two great epochs:

Epoch 1: The Apple II sold with no expansion cards, but many expansion slots. Hackers and business designed addons for years.

Epoch 2: The Apple IIe (and later IIc) were sold with an optimal set of expansion cards.

So you had one generation of experimentation and a second generation that leveraged all the hard work!


"Hackers and business designed addons for years."

Hackers and "business" continue to design and sell cards for them!

(CompactFlash & USB-storage interface card) http://dreher.net/?s=projects/CFforAppleII&c=projects/CFforA...

(ethernet boards) http://a2retrosystems.com/

(RAM boards) http://www.brielcomputers.com/wordpress/?p=321


See also http://bespin.org/~qz/pc-gpe/fli.for - the .fli format was pretty common at one time and does this same delta encoding.


> real-time dithering

Wasn't dithering done before the encoding? I thought that was the reason he needed ordered dithering.


"This qualifies as art, not just software."

For me there was never any doubt.

A bit unrelated, but I've got an old 5150 at my parent's place, so when I'm visiting next Xmas I'll try to load this demo onto it. The only problem is that of transferring files to it. It only has a 5.25" floppy drive, and I don't have a means to copy files onto those floppies.

Any suggestions?


I have, in the past, been forced to type an Xmodem transfer program into debug.com's hex mode, to get to the point where I can transfer files over a null-modem connection from another box. I can dig up the file in question, if that'd help you out at all.

I ended up typing it in 1k at a time, and independently typing in a CRC32 utility to check that I'd done it properly.

(That was to install Windows 98 on a computer with no drives, if I recall. So, not so very long ago.)
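
From memory, the check itself boils down to something like this (standard IEEE/zlib polynomial; a sketch, not the exact utility I typed in back then):

    /* Minimal CRC-32 (reflected, polynomial 0xEDB88320), good enough to
       verify a hand-typed binary against a known checksum. */
    #include <stdio.h>
    #include <stdint.h>

    uint32_t crc32_update(uint32_t crc, const unsigned char *buf, size_t len)
    {
        crc = ~crc;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1)));
        }
        return ~crc;
    }

    int main(int argc, char **argv)
    {
        FILE *f = argc > 1 ? fopen(argv[1], "rb") : NULL;
        if (!f) { fprintf(stderr, "usage: crc32 FILE\n"); return 1; }
        unsigned char buf[4096];
        uint32_t crc = 0;
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            crc = crc32_update(crc, buf, n);
        fclose(f);
        printf("%08X\n", (unsigned)crc);
        return 0;
    }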


That's how I used to transfer files to my coding buddies.

On the phone, hex dump in S-record format, then read out loud while the other side would type in the line. Checksum matches? Next line...

Our respective parents were not too happy about this unplanned usage of their phone lines but it saved a ton of cycling.


That's how you do it! What's today's equivalent that kids do?


Yes, I would love to find that specific program - as there are several Xmodem transfer programs out there and I'd like to use one that's not only small in size, but also most likely to work.


You can run Norton Commander on both computers and set them on connect mode. One PC is set as master and the other as slave; connect them physically with a parallel cable.


Good idea - but then you would need Norton Commander, which I don't have for some reason.


KryoFlux is a USB-based floppy controller http://www.kryoflux.com/


Will check it out!


Squirt it in bit by bit down a serial port?


Laplink?


Isn't dithering done offline at encoding time?


Wow... for that hardware, a 4.77MHz 8088 PC with CGA graphics and Sound Blaster audio, those stats are overwhelming:

1. Variable frame-rates up to 60 FPS.

2. Audio rates to 45kHz.

3. 16 colors through composite artifacting.

4. Simultaneous color and B&W output.

On a related note, you will probably be interested in Michael Abrash's Zen of Assembly Language. From the "README.md":

"This is the source for an ebook version of Michael Abrash's Zen of Assembly Language: Volume I, Knowledge, originally published in 1990. Reproduced with blessing of Michael Abrash, converted and maintained by James Gregory. Original conversion produced by Ron Welch."

https://github.com/jagregory/abrash-zen-of-asm


It's worth pointing out (on a quick scan I don't see this called out in the article itself) that the preprocessing involved in generating these executables is almost certainly not meaningfully possible on a 5150 PC.

So while this might run on 1978-era hardware, it wouldn't have been possible for 1978-era hackers to create.


Seems like memory would be the only limiting factor here, i.e. storing both the previous and current frame, computing the difference between the two, then sorting the runs. My hunch is it should be possible with a large enough HDD for swap space (obviously you'd have to swap yourself) and waiting a day to render a short movie.

Edit: and now I realise you need a movie source, which in 1978 means a VHS tape most likely. Reading that and converting it to a sequence of dithered frames (or "just" straight 24-bit 4:4:4 YUV) will definitely need some special hardware.
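
For the "sorting the runs" step, something like this would do; the scoring metric and the 3-byte run header here are my own guesses, not anything from the actual encoder:

    /* Offline run selection: rank changed runs by an (assumed) visual-impact
       score and keep only as many as the per-frame byte budget allows. */
    #include <stdio.h>
    #include <stdlib.h>

    struct run {
        int offset;     /* byte offset into the frame buffer */
        int length;     /* number of changed bytes */
        long score;     /* e.g. sum of |new - old| over the run (assumed metric) */
    };

    static int by_score_desc(const void *a, const void *b)
    {
        const struct run *ra = a, *rb = b;
        return (rb->score > ra->score) - (rb->score < ra->score);
    }

    /* Sorts runs by descending score and returns how many fit in 'budget'
       payload bytes (3-byte header per run assumed). */
    int select_runs(struct run *runs, int n, int budget)
    {
        qsort(runs, n, sizeof runs[0], by_score_desc);
        int kept = 0, used = 0;
        for (int i = 0; i < n; i++) {
            int cost = 3 + runs[i].length;
            if (used + cost > budget)
                break;
            used += cost;
            kept++;
        }
        return kept;
    }

    int main(void)
    {
        struct run runs[] = { {100, 7, 500}, {4000, 40, 9000}, {9000, 3, 20} };
        printf("kept %d runs\n", select_runs(runs, 3, 50));
        return 0;
    }

Nothing there needs more than two frames plus the run list in memory at once, so swapping by hand as described does look feasible, just painfully slow.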


Incredible work! If only we had this kind of ingenuity today to get a simple graphics card working with Linux! Imagine the possibilities. One day, I might even have an option in Ubuntu to change the refresh rate to 60Hz without entering 'xrandr -r 60' into the console EVERY DAMN TIME I REBOOT. Now, I know I'm going out on a limb with this next one, but imagine if someone had the intelligence to code a universal installer that works every time and installed every piece of needed software all at once with zero user interaction??! I'm getting a bit craaazy here, but imagine effortless uninstalls! Mind blown.

Edit: "One more thing" as I get voted down by those in denial. Imagine this brilliance getting Linux to talk to a relatively unheard of device called an iPhone 5s! It sure would be nice getting pictures and video off this damn phone so I can free up space!


> imagine if someone had the intelligence to code a universal installer that works every time and installed every piece of needed software all at once with zero user interaction

What's stopping you?


IMHO you are being downvoted for being off topic, not necessarily for the accuracy of your thoughts. (I did not downvote you BTW, I take pity on gray comments.)


The tone and inaccuracy certainly didn't help.



> getting Linux to talk to a relatively unheard of device called an iPhone 5s! It sure would be nice getting pictures and video off this damn phone so I can free up space!

I do it all the time. All it takes is to plug the phone in.


Answering the dead question, it's an iPhone 5s running the latest iOS. The only catch is that I have to plug it unlocked and tell it to trust my computer (running Ubuntu 14.04) when the prompt pops up. It imports pictures into Shotwell just fine. I also tested it with an iPhone 4 (not 4S) and it worked just the same.


The work is very impressive, but the part I loved most was how even after deciding the problem was "impossible" to solve, he kept at it:

> "Then I thought about the problem for 7 years."


In the demoscene, doing the impossible was just regular behaviour :-) Just look at what they get out of a mere C64...


The VIC-II and SID chips in the C64 are amazing. They blew away anything that could be found inside an IBM PC.


I'm a softy, but I still think the SID is amazing. The sounds that developers squeezed out of that chip are a testament to true hackery. I have a feeling the tricks developed to stress the SID to its maximum have been adopted by a lot of serious audio developers in recent years to get higher track count / lower latency / higher bit rate etc. It's like the SID was the home-chemistry set equivalent for many a professional DAW / plugin designer.



Wow - Trixter from Hornet - haven't heard that name in a long time. Always fun to dip back into the demoscene every now and then. Think I might just break out my Mindcandy dvd tonight.


At first I wanted to link to C64 productions such as this one https://www.youtube.com/watch?feature=player_detailpage&v=gG... but then I realized that the IBM PC was vastly worse designed than the C64. On the CPU, although running at a whopping 4.77 MHz, trivial operations take loads of cycles. The graphics memory sits on an ISA slot, with an 8-bit path and a data rate of a few MHz.

So, well done doing this on a machine that weak!


For perspective, it was still difficult doing video in the year 1999... I still remember the lengths some people had to go to get MPEG working on the Nintendo 64: http://web.archive.org/web/20080331015735/http://www.gamasut...



The author mentions getting involved with Video for Windows in the early '90s. For a laugh, here's some official VfW sample video from 1992 for comparison: https://www.youtube.com/watch?v=b4ieKNtZ8yY


8088 Domination is one of the truest examples of hacking and deserves to be stickied on HN. I mean it!


This is truly impressive, I'm amazed by how fast it renders. So I guess Mr. Gates was right, you only need 64k...


> So I guess Mr. Gates was right, you only need 64k...

That was 640K.


And it wasn't Bill Gates who said that.


and no more than 4.77 MHz.


"I think there is a world market for maybe five computers" - Thomas J. Watson, 1943.

http://en.m.wikipedia.org/wiki/Thomas_J._Watson


The first PC I ever used was an Amstrad 8086 with CGA; seeing the 4-colour palette 1 again made me super nostalgic. Somehow back then playing games with only black, white, magenta and cyan didn't bother me.


The Amstrad PC1512/PC1640 actually had this quirky 640x200x16 color mode. I only ever came across one piece of software except the bundled GEM that supported it though...


Wiz / Imphobia here.. Damn, that is dedication. I didn't know there was a 160x200 x 4-bit resolution on the CGA card. How was that activated?


Wikipedia has a bit under "special effects on composite monitors" http://en.m.wikipedia.org/wiki/Color_Graphics_Adapter


Forget about it, the answer to my question was written below my post :-) Kudos for the video on the XT!


You might also be interested in these efforts to get video playing on the Sega Genesis (https://www.youtube.com/watch?v=2vPe452cegU) and the TI-84 graphing calculator (https://www.youtube.com/watch?v=6pAeWf3NPNU)


If someone in 1981 had seen this running on the same hardware they would have assumed it was alien software created by a higher-level species.


I never knew CGA could do 16 colors. In my memory 16 colors is EGA.


That caught my attention as well. I wish I had known this trick 3 decades ago... http://en.wikipedia.org/wiki/Color_Graphics_Adapter#160.C3.9...


Interestingly, they talk about the "change palette in the middle of the screen" tricks that we used in Imphobia. The precision timing was certainly tricky, especially when playing a MOD file while drawing! Ahhh, memories...


Try changing modes in the middle of the scan :)


Argh! Was that possible? I actually stopped playing with that after my last overscan "320 x +/- 240" attempt, during which, for some reason, the electron beam concentrated itself on exactly one scan line of the screen, rendering it super bright and emitting a super scary sound. My screen has always had a darker line in the middle since that experiment :-( You could actually damage things by playing with hardware...


> Argh! Was that possible?

Sure, if you were prepared to give up a few scanlines for the register changes. The monitor will happily continue to scan as long as the basics (vertical resolution, frame rate) don't change and you make sure the coils are still being swept.

That's why you ended up with that darker scanline: for a brief time the vertical deflection was turned off, and that caused that one scanline to be hit by the electron beam in rapid succession at an intensity it normally would not receive.

It's like looking into the sun.

Scanning is the hard part, so you don't need to worry too much if you keep the timing steady but you can change things like colours, palette contents, horizontal resolution without too much trouble.

If you're going to mess with the vertical resolution then you'll have to have write access to the register that counts the scanlines (and you'll need to set it to what it would have been had the whole screen been that resolution).

And of course at the end of the frame you have to switch it all back.


My family had a CGA pc when I was young, and while it could do 16 colours, the horribly low resolution of the 16 colour mode meant very little ever made use of it.


It's not that horrible. You can do some pretty interesting things with it, like this: https://www.youtube.com/watch?v=G66KL-hxxKI


King's Quest I did 16 colors on CGA


CGA wasn't natively capable of displaying 16 distinct colors in its higher-resolution modes. KQ1 - and a lot of games in that era - did 16 colors on CGA's composite mode by actually exploiting the peculiarities of NTSC video to generate color artifacts on the screen. If you were to view the same video on an RGB monitor, what you'd actually see would usually be a monochrome screen filled with varying patterns of narrow horizontal and vertical stripes.

See: http://en.wikipedia.org/wiki/Color_Graphics_Adapter#Special_...


I imagine that part 2 is effectively going to involve "compiling" the sequences to assembly and executing them. You just don't have a lot of cycles to do much math on the 8088, so you may as well just compile this as a big assembly program and start running it.

Looking forward to part 2 very much.
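
To make that guess concrete: the "compiled" output for one frame might be nothing but straight-line stores into CGA memory at B800:0000, so playback is basically a far call per frame. Pure speculation on my part about the shape of it, generating NASM-ish text:

    /* Hypothetical "video compiler": turn each changed run into straight-line
       8088 code that pokes the new bytes into the CGA framebuffer.  This is a
       guess at the idea, not what the article actually does. */
    #include <stdio.h>

    struct run { int offset; int length; const unsigned char *bytes; };

    void emit_frame(FILE *out, const struct run *runs, int nruns)
    {
        fprintf(out, "        mov ax, 0B800h\n");
        fprintf(out, "        mov es, ax\n");
        for (int i = 0; i < nruns; i++) {
            /* word-sized stores would halve the instruction count; byte
               stores keep the sketch simple */
            for (int j = 0; j < runs[i].length; j++)
                fprintf(out, "        mov byte [es:0%04Xh], 0%02Xh\n",
                        runs[i].offset + j, runs[i].bytes[j]);
        }
        fprintf(out, "        retf\n");
    }

    int main(void)
    {
        static const unsigned char data[] = { 0xAA, 0x55, 0xFF };
        struct run r = { 0x1234, 3, data };
        emit_frame(stdout, &r, 1);   /* one tiny 3-byte run as a demo */
        return 0;
    }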



This reminds me of Yoomp!

http://yoomp.atari.pl/media.htm

1.79 MHz 6502..


The second (and last) part of trixter's write-up on how 8088 Domination was achieved is already up:

8088 Domination Post-Mortem, Conclusion https://news.ycombinator.com/item?id=7924928


As a person who started with 24-bit colors, I have nothing but respect for Jim.


The Spaceballs references almost make me cry from nostalgia. Thanks!


I can't wait to try this out. I still have my 4.77 MHz 8088 IBM PC with CGA video.

I don't have a monitor for it anymore, but that is fine since this was designed for the composite out anyway which I can put on a TV.

And I need to find an 8-bit SoundBlaster to put back in it.

Nice to see trixter doing stuff. I remember him from demoscene stuff in the 90s. Back then, PC demos were for 386/486/Pentium and VGA graphics. Nobody bothered with PC or XT (or CGA or EGA graphics) even back then.


The references to Spaceballs left a nostalgic tear in my eye.


It's fascinating that the developer basically reinvented RLE, but did it significantly better.


When you can adapt the problem to the solution, the solution becomes much easier. Real-world problems are rarely so adaptable; the first issue being that "code from bare metal up" is rarely an option.


This makes me want to run this on my 8088. Does it fit on a 10 MB HD? Also, I wonder if any of the old floppies I have still work... I haven't turned the thing on in over a decade.


I wonder how well PC emulators will render this :-)


I don't know of any accurate IBM PC emulator. Most of the typical ones like DosBox will play it either too fast or too slow.


MESS and PCem should be accurate enough.


Dosbox plays it fine, but only B&W.


With mode=cga it plays in color, though some of the colors are inaccurate compared to a real PC.


Wow, this is the first time I've seen Bad Apple on HN. That alone is enough to earn my upvote!


This looks like a reimplementation of animated GIF.


That is not at all like how GIF animation works.

http://en.m.wikipedia.org/wiki/Graphics_Interchange_Format#A...


Umm yes it is?

"Some economy of data is possible where a frame need only rewrite a portion of the pixels of the display, because the Image Descriptor can define a smaller rectangle to be rescanned instead of the whole image"

Every frame of an animated GIF can choose to modify a small portion of the previously drawn image. This is why you can't display an animated GIF starting in the middle: you will only render the moving parts until you loop the whole thing.

This is precisely what the author implemented: he is encoding changes between frames, so the CPU only has to modify the parts of display memory that are changing.

How is that not the way animated GIF works?


This is delta coding; it's part of animated GIF, which then applies LZW compression to the deltas. This guy has used RLE compression (like TIFF, I believe) instead.
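
For reference, the TIFF-style RLE (PackBits) is about this simple; a sketch only, I'm not claiming this is the demo's exact format:

    /* PackBits-style RLE: a header byte of -(n-1) means "repeat the next
       byte n times" (n = 2..128); a header byte of n-1 means "copy the next
       n bytes literally" (n = 1..128). */
    #include <stdio.h>
    #include <string.h>

    size_t rle_encode(const unsigned char *in, size_t n, unsigned char *out)
    {
        size_t o = 0, i = 0;
        while (i < n) {
            size_t run = 1;
            while (i + run < n && in[i + run] == in[i] && run < 128)
                run++;
            if (run > 1) {                              /* repeated bytes */
                out[o++] = (unsigned char)(257 - run);  /* -(run-1) as a byte */
                out[o++] = in[i];
                i += run;
            } else {                                    /* literal stretch */
                size_t lit = 1;
                while (i + lit < n && lit < 128 &&
                       (i + lit + 1 >= n || in[i + lit] != in[i + lit + 1]))
                    lit++;
                out[o++] = (unsigned char)(lit - 1);
                memcpy(out + o, in + i, lit);
                o += lit;
                i += lit;
            }
        }
        return o;
    }

    int main(void)
    {
        unsigned char in[] = "aaaaaaaabcdeeeee", out[64];
        size_t n = rle_encode(in, strlen((char *)in), out);
        printf("%zu -> %zu bytes\n", strlen((char *)in), n);
        return 0;
    }

The appeal over LZW for this use case is that RLE decodes with almost no state and very few cycles, which presumably matters far more than compression ratio on a 4.77 MHz 8088.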


Amazing!



