That's absolutely insane at that clockrate. The way I would get 'animations' (for want of a better term) done is by rendering them frame-by-frame, compressing that and then playing it back at high speed. And even that was next to impossible. Decompressing video @60fps, and doing real-time dithering to increase the effective number of colours and still have time enough for 45KHz audio is totally nuts. This qualifies as art, not just software.
For me, the most interesting part is that his solution - updating only the changed parts between each frame and the previous one, and approximating the changes so that they're not (too) visually perceptible in order to satisfy a bitrate constraint - is one of the ways that modern video codecs achieve their compression.
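To make the idea concrete, here's a rough sketch of that principle (not the author's actual encoder; the names and the flat byte-per-cell model are invented for illustration): emit only the cells that changed, and when the per-frame byte budget runs out, defer the rest. Because each delta is computed against what the decoder actually has on screen, deferred changes get picked up in a later frame; a real encoder would additionally rank changes by how visible their absence is.

    #include <cstdint>
    #include <vector>

    struct Change { uint16_t offset; uint8_t value; };

    // Encode one frame as a list of changed bytes, subject to a byte budget.
    // Changes that don't fit are skipped this frame; since the next delta is
    // computed against the decoder's screen state, they are applied later.
    std::vector<Change> encode_frame(const std::vector<uint8_t>& target,
                                     std::vector<uint8_t>& shown,   // decoder's screen state
                                     size_t byte_budget)
    {
        std::vector<Change> out;
        for (size_t i = 0; i < shown.size(); ++i) {
            if (shown[i] == target[i]) continue;                 // unchanged: costs nothing
            if (out.size() * sizeof(Change) >= byte_budget)      // budget exhausted:
                continue;                                        // defer this change
            out.push_back({static_cast<uint16_t>(i), target[i]});
            shown[i] = target[i];                                // decoder will now have it
        }
        return out;
    }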
I agree. It's also amazing that, apparently, the true limitations of hardware from over 30 years ago are still rather elusive... this is the complete opposite of the "throw more hardware at it" attitude taken towards most software problems today; instead it's "throw more brainpower at it".
The more I progress in our domain of expertise, the more I observe we're being incredibly wasteful† all over the place. For all the expressive power of our platforms and languages, it somehow sounds insane that time (ruby -e '100_000_000.times {}') takes four solid seconds on my 3.4GHz machine††. I know, bogoMIPS are no benchmark; this is just to exemplify that layers of abstraction, while useful (necessary even), are also harmful, the underlying question being: how many layers is too many layers?
I dream of a system redesigned from the ground up, where hardware and software components, while conceptually isolated, cooperate instead of segregating each other into layers. See how ZFS made previously segregated layers cooperate to offer a robust system, see how TRIM operates at the lowest hardware levels by notifying the drive of filesystem events, see how OSI levels get pierced through for QoS and reliability concerns. Notice how the increase in layers, and thus in holistic complexity, rampantly leads to more bugs, more vulnerabilities, more energy wasted. We all know the fastest code is the one that does not execute, the most robust code is the one that doesn't get written, the most secure code is the one that doesn't exist. Why do I still see redraws and paintings and flashes in 2014? Why does a determined adversary have such a statistical advantage that he is almost guaranteed to get a foothold into my system? This is completely unacceptable. For as much as we love playing with it, the whole web stack, while a significant civilization milestone, is, as a whole, a massive technological failure (the native stack barely fares better).
† I consider wasteful and bloated subtly distinct
†† not at all an attack on Ruby, just what I happen to have at hand right now
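For comparison, and with the usual caveat that an empty loop is a meaningless benchmark (the compiler will happily delete it unless you stop it), here's roughly the same 100-million-iteration count in C++, which on comparable hardware finishes in a fraction of a second:

    #include <chrono>
    #include <cstdio>

    int main() {
        volatile long long sink = 0;   // volatile so the compiler can't delete the loop
        auto t0 = std::chrono::steady_clock::now();
        for (long long i = 0; i < 100000000LL; ++i)
            sink = sink + 1;
        auto t1 = std::chrono::steady_clock::now();
        long long ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
        std::printf("counted to %lld in %lld ms\n", (long long)sink, ms);
    }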
I think the underlying cause of this overabstraction is largely a result of abstraction being excessively glorified (mostly) by academics and formal CS curricula. In some ways, it's similar to the OOP overuse that has thankfully decreased somewhat recently but was extremely prevalent throughout the 90s. In software engineering, we're constantly subjected to messages like: Abstraction is good. Abstraction is powerful. Abstraction is the way to solve problems. More abstraction is better. Even in the famously acclaimed SICP lecture series [1] there is this quote:
"So in that sense computer science is like an abstract form of engineering. It's the kind of engineering where you ignore the constraints that are imposed by reality."
There is an implication that we should be building more complex software just because we can, since that is somehow "better". Efficiency is only thought of in strictly algorithmic terms, constants are ignored, and we're almost taught that thinking about efficiency should be discouraged unless absolutely necessary because it's "premature optimisation". The (rapidly coming to an end) exponential growth of hardware power made this attitude acceptable, and lower-level knowledge of hardware (or just simple things like binary/bit fields) is undervalued "because we have these layers of abstraction" - often leading to adding another layer on top just to reinvent things that could be easily accomplished at a lower level.
The fact that many of those in the demoscene who produce amazing results have never formally studied computer science leads me to believe that there's a certain amount of indoctrination happening, and I think to reverse this there will need to be some very massive changes within CS education. Demoscene is all about creative, pragmatic ways to solve problems by making the most of available resources, and that often leads to very simple and elegant solutions, which is something that should definitely be encouraged more in mainstream software engineering. Instead, the latter seems more interested in building large, absurdly complex, baroque architectures to solve simple problems.
Maybe the "every byte and clock cycle counts" attitude might not be ideal either for all problems, but not thinking at all about the amount of resources really needed to do something is worse.
> how many layers is too many layers?
Any more than is strictly necessary to perform the given task.
"Demoscene is all about creative, pragmatic ways to solve problems by making the most of available resources"
It probably doesn't hurt that nobody expects a demo scene app to adapt to radical changes in requirements, or to interoperate with other things that are changing as well - for that matter, to even conform to any specific requirements other than "being epic".
For instance, the linked 8088 demo encodes video in a format that's tightly coupled to both available CPU cycles and available memory bandwidth. Its goal is "display something at 24fps".
Not that I'm a fan of abstraction-for-its-own-sake, but putting scare-quotes around real problems like premature optimization is an excessive counter-reaction.
The period up to the ~'60s gave us a vast theoretical foundation, and from then on we toyed with it, endlessly rediscovering it (worst case) or slightly prodding it forward (best case), trying to turn this body of knowledge into something useful while accreting it into platforms of code, copper and silicon. My hope is that the next step will eventually be for some of us to stop our prototyping, think about what matters, and build stuff this time, not as a hyperactive yet legacy-addicted child, but as a grown-up, forward-thinking body that understands this is not just about a funny toy or a monolithic throwaway tool that will end up lasting decades, but about a field that has a purpose and a responsibility.
To correct the quote:
Computer science is not an abstract form of engineering. Software (and hardware in the case it's made to run software) engineering is leveraging CS in the context of constraints imposed by reality.
> Any more than is strictly necessary to perform the given task.
Easy to say, but hard to define up front when 'task' is an OS + applications + browser + the hardware that supports it ;-)
This[0] is the typical scenario I'm hoping we would build a habit of doing.
> abstraction being excessively glorified (mostly) by academics and formal CS curricula.
It's not just academics, it's many developers, too.
We're in an old-school thread. We like what's really going on. Hang out in the Web Starter Kit from last night though, and you'll find tons of people who glorify abstraction.
The reality is that competing forces spread out the batter in different directions: the abstractionists write Java-like stuff. The old-schoolers exploit subtle non-linearities.
Actual commercial shipments rely on a complex "sandwich" of these opposed practices.
> Demoscene is all about creative, pragmatic ways to solve problems
Yes and I grew up with the demoscene (c64 and amiga 500) and it's also about magic, misdirection, being isolated for long winters and celebrating a peculiar set of values. Focus is shifted toward things that technologists know are possible, such as tight loops running a single algorithm that connects audio or video with pre-rendered data, not on what people want or need, such as CAD software or running mailing lists. Flexibility, integration and portability are eschewed in favor of performance.
Don't get me wrong, I LOVE the demoscene - it's the path that got me to love music. And I have near-total apathy for functional programming. I only code in Javascript when weapons are pointed at my heart, but with the proper balance, there are some very real reasons to make use of abstraction. It's not just academics, it's people solving real problems. The trick is to act strategically with respect to the question: which parts will you optimize and which parts will you offload to inefficient frameworks?
> I think to reverse this there will need to be some very massive changes within CS education.
For instance, starting it in elementary school. A surprisingly large amount of the mathematical portion of CS has very little in the way of prerequisites.
Having been in the demoscene (Imphobia) for a long time and having also worked on more abstract stuff (quad-tree construction optimizations), I can say that writing a demo is not the same as computing theory. Writing a demo is most often exploiting a very narrow area of a given technology to produce a seductive effect (more often than not, to fake something thought impossible so that it looks possible). So you're basically constraining the problem to fit your solution.
On the other hand, designing pure algorithms is about figuring out a solution for a given, canonical and often unforgiving problem (quicksort, graph colouring?). To me, this is much harder. It involves much the same amount of creativity but somehow it's harder on your brain: no, you can't cheat; no, you can't linearize n² that easily :-)
To take an example: you can make "convincing" 3D on a C64 in a demo because you can cheat, precalculate, and optimize in various ways for a given 3D scene. Now, if you want the same level of 3D but for a video game where the user can look at your scene from unplanned points of view, then you need more flexible algorithms such as BSP trees. So you end up working at the algorithm/abstract level...
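For readers who haven't met them: the appeal of a BSP tree is that a correct back-to-front drawing order for an arbitrary viewpoint falls out of a trivial recursion, which is exactly the flexibility a precalculated demo effect doesn't need. A minimal 2D sketch (all the types here are invented for illustration):

    #include <functional>

    struct Vec2 { float x, y; };
    struct Plane { Vec2 normal; float d; };          // n.p + d = 0
    struct BspNode {
        Plane split;
        const BspNode* front = nullptr;              // subtree on the positive side
        const BspNode* back  = nullptr;              // subtree on the negative side
        int polygon_id = -1;                         // geometry stored at this node
    };

    // Back-to-front traversal for painter's-algorithm rendering:
    // draw the subtree on the far side of the splitting plane first,
    // then this node's polygon, then the near side.
    void draw_back_to_front(const BspNode* node, Vec2 eye,
                            const std::function<void(int)>& draw)
    {
        if (!node) return;
        float side = node->split.normal.x * eye.x + node->split.normal.y * eye.y + node->split.d;
        const BspNode* near_side = (side >= 0) ? node->front : node->back;
        const BspNode* far_side  = (side >= 0) ? node->back  : node->front;
        draw_back_to_front(far_side, eye, draw);
        draw(node->polygon_id);
        draw_back_to_front(near_side, eye, draw);
    }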
A very good middle ground here was Quake's 3D engine. They used the BSP engine and optimized it with regular techniques (and there they used the very smart idea of potentially visible sets), but they also used techniques found in demos (M. Abrash's work on optimizing texture mapping is a nice "cheat" -- and super clever).
Now don't get me wrong, academia is not more impressive than the demoscene (though certainly a bit more "useful" for society as a whole). These are just two different problems, and there are bright minds that make super impressive stuff in both of them...
> I think to reverse this there will need to be some very massive changes within CS education.
Well, I mean, that is most definitely true regardless. But in my experience getting my BS in CS a few years ago, it had nothing to do with "mainstream software engineering" either. I had classes on formal logic and automata, algorithms (using CLRS), programming language principles (where we compared the paradigms in Java, Lisp, Prolog, and others), microprocessor design (ASM, Verilog, VHDL), compilers, linear algebra, and so on. Very little in the way of architecting and implementing large, abstracted, real-world business applications or anything remotely web-related. In my experience I did not meet anyone interested in glorifying heaps of whiz-bang abstraction; they seemed to be more in line with the stereotypical "stubbornly resisting all change and new development" camp of academics.
I sense the frustration around this subject is building. What I'm afraid of is that once it boils over into action it will lead to a repetition of the same moves. That's the hard part: getting a 'fresh start' going is ridiculously easy, and that ease is one of the reasons we have this mess in the first place.
Very hard to avoid the 'now you have two problems' trap.
Indeed. The problem with starting over is that anything you start over with is going to be simpler, at first. Thus potentially faster, easier, etc, etc.
Rewrites are hard and costly, which is rarely taken into account. Even just maintaining a competent fork is hard enough.
I think it's probably worth the effort, but I'm not quite sure how you get from A to B without just having some super competent eccentric multi-billionaire finance a series of massive development projects.
> I think it's probably worth the effort, but I'm not quite sure how you get from A to B without just having some super competent eccentric multi-billionaire finance a series of massive development projects.
And Elon Musk is busy doing rockets and electric cars!
I think it didn't happen because the people feeling this way are precisely the ones in a position to understand how vast and hard an undertaking it is, not only to attempt, but to carry through to success.
Few have attempted a reboot, yet the zeitgeist is definitely there: ZFS, Wayland, Metal, A7, even TempleOS (or whatever its name is these days). Folks are starting to say to themselves 'hey, we built things, we learned a ton, we do feel the result, while useful, is a mess, but we now genuinely understand that we need to start afresh and how'. It's as if everyone were using LISP on x86 and suddenly realised they might as well use LISP machines.
I too fear we just loop over, yet my hope is that in doing that looping, our field iteratively improves.
I'd answer in two ways. One, it is already happening. The 10M problem (10 million concurrent open network connections) is solved by getting the Linux kernel out of the way and managing your own network stack: http://highscalability.com/blog/2013/5/13/the-secret-to-10-m... - The beauty of their approach is that they still provide a running Linux on the side to manage the non-network hardware, so you have a stable base to build and debug upon.
Two, I am not sure we are that much smarter now than we were then. As you used a language problem as an example, I'll do the same. See this SO question: https://stackoverflow.com/questions/24015710/for-loop-over-t... . I wanted a "simple" loop over some code instantiating several templates. I say simple because I had first written the same code in Python and found it was too slow for my purposes, and thus rewrote it in C++. In Python this loop is dead simple to implement: just use a standard for loop over a list of factory functions. In C++ I pay for the high efficiency by turning this same problem into an advanced case of template metaprogramming that in the end didn't even work out for me, because one of the arguments was actually a "template template". And on the other hand, making the C++ metaprogramming environment more powerful has its own set of problems: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n361...
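For the simple case, the closest C++11 gets to a "for loop over types" is expanding a parameter pack (this still doesn't handle the template-template case, but it shows the contrast with the Python version; the names here are just for illustration):

    #include <cstdio>

    // The "loop body": a function template, instantiated once per type.
    template <typename T>
    void run_case(int n) {
        std::printf("instantiated for a type of size %zu, n = %d\n", sizeof(T), n);
    }

    // The "for loop": expand a parameter pack, calling run_case<T> for each T in order.
    template <typename... Ts>
    void run_all(int n) {
        int expand[] = { (run_case<Ts>(n), 0)... };   // C++11 pack-expansion idiom
        (void)expand;
    }

    int main() {
        run_all<char, int, double>(42);               // three instantiations, one call site
    }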
I'm finding that an inherent psychological part of software development is to accept that nothing will be perfect. Everything is fucked up at some level, and there's no practical way around it. You just bite the bullet.
Back then there was simply no other way. I remember doing a 3D real-time fly-by of a big architectural development in Amsterdam ("Meervaart") in the 80's. I custom built the machine, pulled a trick where I clocked the FP coprocessor faster than the main processor, and had a Tseng graphics card (just about as fast as it would go at the time). All the rest was software: hidden-line removal, 800x600 on some primitive beamer at 25 fps. It was the best I could do at the time and it took many weeks to prepare for that demo. Just digitizing the whole neighbourhood was a monk's job; I still have the aerial photograph as a souvenir from the job.
I got paid with a rusty old car that I wanted the engine from :)
Wow, as someone who saw some "cutting edge" 3D as a young student in the early 90's, this is beautiful. Weren't the Tseng cards in the 80's pretty much the first consumer cards with features hinting at fmv / 3d ? I was a tad young to know the details, I know their cards in the early 90's were incredible, but I wasn't there for the first Tseng labs stuff. Friends of mine claim that the early Tseng stuff was so impressive they suspected fakery in some of the demos!
Your clocking antics remind me of when I had to match a motherboard / processor to the maximum serial data rate acceptable by an old milling machine. The controlling software was no longer supported, and relied on the clock speed for timing (disastrous for controlling motors / servos etc) so I trialled a bunch of processor / MB combos until the milling machine accepted the output... Involved underclocking a Cyrix Cx something on some unknown brand MB that supported non-standard clock multipliers.
I loved the Tseng mostly because of its nice memory map and the fact that the registers weren't very secret. Before that it was "VGA Wonder" (ATI).
The Tseng vesa cards did not do 3D but they were blisteringly fast (for the time) if you knew how to hit them 'just so'. Do everything by the row and avoid bank switches at all cost.
The funny thing is that the driver I wrote for the card was only about 2% or so Tseng-specific. gp_wdot, gp_rdot, gp_wrow and gp_rrow were the only routines out of about 150 or so that were optimized, and they were quite short to begin with. And that alone was enough to get very close to maximum bandwidth between the CPU and the graphics memory (this was across the VLB).
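For anyone curious what "do everything by the row and avoid bank switches" boils down to in practice, the shape of a routine like gp_wrow was roughly this (a reconstruction from memory, not the original code; the helper name set_write_bank and the ET4000 segment-select port are stand-ins, and real code would use a far pointer to A000:0000 under DOS):

    #include <cstdint>
    #include <cstring>

    // Hypothetical bank switch. On ET4000-class cards this would be an OUT to the
    // card's segment-select register (0x3CD, if memory serves); stubbed here so
    // the sketch compiles anywhere.
    static void set_write_bank(uint8_t bank) { (void)bank; }

    // Write one row of pixels, touching the bank register at most twice.
    // 'vram' stands in for the 64 KB window, 'pitch' is bytes per scanline.
    void gp_wrow(uint8_t* vram, uint32_t pitch, uint32_t y, uint32_t x,
                 const uint8_t* src, uint32_t len)
    {
        uint32_t addr   = y * pitch + x;       // linear address in video memory
        uint8_t  bank   = addr >> 16;          // which 64 KB bank it falls in
        uint32_t offset = addr & 0xFFFFu;

        set_write_bank(bank);
        uint32_t first = 0x10000u - offset;    // bytes left before the bank boundary
        if (first > len) first = len;
        std::memcpy(vram + offset, src, first);

        if (first < len) {                     // row spills into the next bank
            set_write_bank(bank + 1);
            std::memcpy(vram, src + first, len - first);
        }
    }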
I like your clocking trick a lot better than mine, I just soldered an extra socket for an oscillator to the motherboard and ran one wire under the chip to the right pin (and I cut one trace on the motherboard). Plugging in a bunch of oscillators until the FP chip started to behave weird (and then adding a little fan and pushing it some more :) ).
Interesting how those payments worked out.
Now I'm seriously wondering if there is a way in which I could resurrect that demo. No idea what I did with the data, I probably still have the code in some form or a descendant of it.
This was the card I originally wrote the code for:
Oh trust me if I had had the money I would have happily pursued your route. Cutting a trace on your only working computer and soldering bits & pieces onto the motherboard in order to land a job (talk about risk/reward here, I'm not sure how I would have worked without that machine but I really wanted that engine ;) ) made me pretty nervous. If I could have saved myself that batch of cold sweat I would have happily done so.
What got me is that it did work, I fully expected there to be some level of synchronization between the chips that would require both of them to be clocked at the same rate. The only reason I tried this is that the main CPU appeared to stop working and I figured it was worth a shot to see if the FP could go faster. And it did, and not just a little bit faster! Apparently Intel engineers were quite friendly when they designed the interaction between the two processors because in spite of the huge discrepancy in clock speed between the two chips it worked incredibly well.
> the true limitations of hardware from over 30 years ago are still rather elusive
That was the basic idea that kept the Apple II line alive for ~15 years on an 8-bit processor running at 1 MHz. Of course at the end there were a handful of faster configurations, but the IIgs @ 2.8 MHz and the short-lived IIc+ at 4 MHz were the only machines Apple produced with faster processors.
Why the Apple II was still kept around for that long is kind of a mystery to me.
It's not games. Maybe educational customers?
Maybe next to no migration path for business users? I had an uncle who ran a veterinarian clinic off of Appleworks and several floppies worth of data for god knows how long.
"Works for me" is a powerful force, and they'd probably squeezed all the costs out of the Apple II line.
"Apple II was still kept around for that long is kind of a mystery to me."
There was a very strong following, especially in the educational market. I remember seeing schools purchasing labs of IIGS's as late as the early 1990's.
Basically, the Apple ][ was the cash cow that kept Apple afloat for years while they tried to sell 68k Macs. Apple basically tried to kill the II for a decade but wasn't successful enough to just cut off the customer base that was crying for new models.
A bit unrelated, but I've got an old 5150 at my parent's place, so when I'm visiting next Xmas I'll try to load this demo onto it. The only problem is that of transferring files to it. It only has a 5.25" floppy drive, and I don't have a means to copy files onto those floppies.
I have, in the past, been forced to type an Xmodem transfer program into debug.com's hex mode, to get to the point where I can transfer files over a null-modem connection from another box. I can dig up the file in question, if that'd help you out at all.
I ended up typing it in 1k at a time, and independently typing in a CRC32 utility to check that I'd done it properly.
(That was to install Windows 98 on a computer with no drives, if I recall. So, not so very long ago.)
Yes, I would love to find that specific program - as there are several Xmodem transfer programs out there and I'd like to use one that's not only small in size, but also most likely to work.
You can run Norton Commander on both computers and set them on connect mode. One PC is set as master and the other as slave; connect them physically with a parallel cable.
Wow... for that hardware, a 4.77MHz 8088 PC with CGA graphics and Sound Blaster audio, those stats are overwhelming:
1. Variable frame-rates up to 60 FPS.
2. Audio rates to 45kHz.
3. 16 colors through composite artifacting.
4. Simultaneous color and B&W output.
On a related note, you will probably be interested in Michael Abrash's Zen of Assembly Language. From the "README.md":
"This is the source for an ebook version of Michael Abrash's Zen of Assembly Language: Volume I, Knowledge, originally published in 1990. Reproduced
with blessing of Michael Abrash, converted and maintained by James Gregory. Original conversion produced by Ron Welch."
It's worth pointing out (on a quick scan I don't see this called out in the article itself) that the preprocessing involved in generating these executables is almost certainly not meaningfully possible on a 5150 PC.
So while this might run on 1978-era hardware, it wouldn't have been possible for 1978-era hackers to create.
Seems like memory would be the only limiting factor here, i.e. storing both the previous and current frame, computing the difference between the two, then sorting the runs. My hunch is it should be possible with a large enough HDD for swap space (obviously you'd have to swap yourself) and waiting a day to render a short movie.
Edit: and now I realise you need a movie source, which in 1978 means a VHS tape most likely. Reading that and converting it to a sequence of dithered frames (or "just" straight 24-bit 4:4:4 YUV) will definitely need some special hardware.
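The diff-and-sort part, at least, is conceptually simple; a guess at its general shape (definitely not the author's actual tool, and sorting longest-runs-first is just one plausible ordering; frames are assumed to be the same size):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct Run { uint16_t offset; std::vector<uint8_t> bytes; };  // contiguous changed span

    // Find contiguous runs of bytes that differ between the previous and current frame.
    std::vector<Run> diff_runs(const std::vector<uint8_t>& prev,
                               const std::vector<uint8_t>& cur)
    {
        std::vector<Run> runs;
        size_t i = 0;
        while (i < cur.size()) {
            if (prev[i] == cur[i]) { ++i; continue; }
            Run r{static_cast<uint16_t>(i), {}};
            while (i < cur.size() && prev[i] != cur[i])
                r.bytes.push_back(cur[i++]);
            runs.push_back(std::move(r));
        }
        // One plausible ordering: longest runs first, so if the frame's
        // byte/cycle budget runs out, the biggest changes are already applied.
        std::sort(runs.begin(), runs.end(),
                  [](const Run& a, const Run& b) { return a.bytes.size() > b.bytes.size(); });
        return runs;
    }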
Incredible work! If only we had this kind of ingenuity today to get a simple graphics card working with Linux! Imagine the possibilities. One day, I might even have an option in Ubuntu to change the refresh rate to 60hz without entering 'xrandr -r 60' into the console EVERY DAMN TIME I REBOOT. Now, I know I'm going on a limb with this next one, but imagine if someone had the intelligence to code a universal installer that works every time and installed every piece of needed software all at once with zero user interaction??! I'm getting a bit craaazy here, but imagine effortless uninstalls! Mind blown.
Edit: "One more thing" as I get voted down by those in denial. Imagine this brilliance getting Linux to talk to a relatively unheard of device called an iPhone 5s! It sure would be nice getting pictures and video off this damn phone so I can free up space!
> imagine if someone had the intelligence to code a universal installer that works every time and installed every piece of needed software all at once with zero user interaction
IMHO you are being downvoted for being off topic, not necessarily for the accuracy of your thoughts. (I did not downvote you BTW; I take pity on gray comments.)
> getting Linux to talk to a relatively unheard of device called an iPhone 5s! It sure would be nice getting pictures and video off this damn phone so I can free up space!
I do it all the time. All it takes is to plug the phone in.
Answering the dead question, it's an iPhone 5s running the latest iOS. The only catch is that I have to plug it unlocked and tell it to trust my computer (running Ubuntu 14.04) when the prompt pops up. It imports pictures into Shotwell just fine. I also tested it with an iPhone 4 (not 4S) and it worked just the same.
I'm a softy, but I still think the SID is amazing. The sounds that developers squeezed out of that chip are a testament to true hackery. I have a feeling the tricks developed to stress the SID to its maximum have been adopted by a lot of serious audio developers in recent years to get higher track counts / lower latency / higher bit rates etc. It's like the SID was the home-chemistry-set equivalent for many a professional DAW / plugin designer.
Wow - Trixter from Hornet - haven't heard that name in a long time. Always fun to dip back into the demoscene every now and then. Think I might just break out my Mindcandy dvd tonight.
At first I wanted to link to C64 productions such as this one https://www.youtube.com/watch?feature=player_detailpage&v=gG... but then I realized that the IBM PC was vastly worse designed than the C64. On the CPU, although running at a whopping 4.77 MHz, trivial operations take loads of cycles. The graphics memory sits on an ISA slot, with an 8-bit path and a data rate of a few MHz.
The author mentions getting involved with Video for Windows in the early '90s. For a laugh, here's some official VfW sample video from 1992 for comparison: https://www.youtube.com/watch?v=b4ieKNtZ8yY
The first PC I ever used was an Amstrad 8086 with CGA; seeing 4-colour palette 1 again made me super nostalgic. Somehow, back then, playing games with only black, white, magenta and cyan didn't bother me.
The Amstrad PC1512/PC1640 actually had this quirky 640x200x16 colour mode. I only ever came across one piece of software, other than the bundled GEM, that supported it though...
Interestingly, they talk about the "change palette in the middle of the screen" tricks that we used in Imphobia. The precision timing was certainly tricky, especially when playing a MOD file while drawing! Ahhh, memories...
Argh! Was that possible? I actually stopped playing with that after my last overscan "320 x +/- 240" attempt, during which, for some reason, the electron beam concentrated itself on exactly one scan line of the screen, rendering it super bright and emitting a super scary sound. My screen always had a darker line in the middle of it after that experiment :-( You could actually damage things by playing with hardware...
Sure, if you were prepared to give up a few scanlines for the register changes. The monitor will happily continue to scan as long as the basics (vertical resolution, frame rate) don't change and you make sure the coils are still being swept.
That's why you ended up with that darker scanline, for a brief time the vertical deflection was turned off and that caused that one scanline to be hit by the electron beam in rapid succession at an intensity that it normally would not receive.
It's like looking into the sun.
Scanning is the hard part, so you don't need to worry too much if you keep the timing steady; you can change things like colours, palette contents and horizontal resolution without too much trouble.
If you're going to mess with the vertical resolution then you'll have to have write access to the register that counts the scanlines (and you'll need to set it to what it would have been had the whole screen been that resolution).
And of course at the end of the frame you have to switch it all back.
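On CGA specifically, that boils down to polling the status register and poking the colour-select register at the right moment. A rough DOS-era sketch (assuming a compiler that provides inp/outp, and the usual CGA ports 0x3DA/0x3D9; getting this rock-solid on real hardware is exactly the "precision timing" headache mentioned above):

    #include <conio.h>   // inp/outp on old DOS compilers (Borland/Microsoft); an assumption here

    enum { CGA_STATUS = 0x3DA, CGA_COLOR_SELECT = 0x3D9 };

    // Wait for the start of vertical retrace (bit 3 of the status register).
    static void wait_vsync(void) {
        while (inp(CGA_STATUS) & 0x08) { }     // leave any vsync currently in progress
        while (!(inp(CGA_STATUS) & 0x08)) { }  // then wait for the next one to begin
    }

    // Crude raster split: after vsync, count blanking periods (bit 0 of the
    // status register) and change the colour-select value partway down.
    void palette_split(int split_line, unsigned char top_pal, unsigned char bottom_pal) {
        wait_vsync();
        outp(CGA_COLOR_SELECT, top_pal);
        for (int line = 0; line < split_line; ++line) {
            while (inp(CGA_STATUS) & 0x01) { }     // wait while the display is blanked
            while (!(inp(CGA_STATUS) & 0x01)) { }  // wait for the next blank to start
        }
        outp(CGA_COLOR_SELECT, bottom_pal);
    }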
My family had a CGA pc when I was young, and while it could do 16 colours, the horribly low resolution of the 16 colour mode meant very little ever made use of it.
CGA wasn't natively capable of displaying 16 distinct colors in its higher-resolution modes. KQ1 - and a lot of games in that era - did 16 colors on CGA's composite mode by actually exploiting the peculiarities of NTSC video to generate color artifacts on the screen. If you were to view the same video on an RGB monitor, what you'd actually see would usually be a monochrome screen filled with varying patterns of narrow horizontal and vertical stripes.
I imagine that part 2 is effectively going to involve "compiling" the sequences to assembly and executing them. You just don't have a lot of cycles to do much math on the 8088, so you may as well just compile this as a big assembly program and start running it.
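I have no idea whether that's actually how part 2 will work, but "compiling the deltas" could mean the preprocessor emits a stream of raw 8088 instructions -- say one MOV word ptr [DI+disp16], imm16 per changed word, terminated by a RET -- that the player simply CALLs with DS pointing at the CGA segment. A hypothetical generator for such a stream:

    #include <cstdint>
    #include <vector>

    struct WordChange { uint16_t offset; uint16_t value; };  // changed word in the framebuffer

    // Emit 8088 machine code that applies a list of word changes and returns.
    // Each change becomes: C7 85 <disp16> <imm16>  = MOV word ptr [DI+disp16], imm16
    // The caller would set DS to the CGA segment (0xB800), DI to 0, and CALL
    // the generated bytes. Purely illustrative.
    std::vector<uint8_t> compile_deltas(const std::vector<WordChange>& changes)
    {
        std::vector<uint8_t> code;
        for (const WordChange& c : changes) {
            code.push_back(0xC7);              // MOV r/m16, imm16
            code.push_back(0x85);              // ModRM: mod=10, reg=000, rm=101 -> [DI+disp16]
            code.push_back(c.offset & 0xFF);   // displacement, little-endian
            code.push_back(c.offset >> 8);
            code.push_back(c.value & 0xFF);    // immediate, little-endian
            code.push_back(c.value >> 8);
        }
        code.push_back(0xC3);                  // RET
        return code;
    }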
I can't wait to try this out. I still have my 4.77 MHz 8088 IBM PC with CGA video.
I don't have a monitor for it anymore, but that is fine since this was designed for the composite out anyway which I can put on a TV.
And I need to find an 8-bit SoundBlaster to put back in it.
Nice to see trixter doing stuff. I remember him from demoscene stuff in the 90s. Back then, PC demos were for 386/486/Pentium and VGA graphics. Nobody bothered with PC or XT (or CGA or EGA graphics) even back then.
When you can adapt the problem to the solution, the solution becomes much easier. Real-world problems are rarely so adaptable, first issue being "code from bare metal up" is rarely an option.
This makes me want to run this on my 8088. Does it fit on a 10 MB HD? Also, I wonder if any of the old floppies I have still work... I haven't turned the thing on in over a decade.
"Some economy of data is possible where a frame need only rewrite a portion of the pixels of the display, because the Image Descriptor can define a smaller rectangle to be rescanned instead of the whole image"
Every frame of an animated GIF can choose to modify a small portion of the previously drawn image. This is why you can't display an animated GIF starting in the middle: you will only render the moving parts until you loop the whole thing.
This is precisely what the author implemented: he is encoding changes between frames, so the CPU only has to modify the parts of display memory that are changing.
This is delta coding; it's part of animated GIF, which then applies LZW compression to the deltas. This guy has used RLE compression (like TIFF, I believe) instead.
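A toy version of the RLE idea, just to show how little machinery it needs (not the actual on-disk format; a real scheme like TIFF's PackBits also has a literal-run escape so incompressible data doesn't double in size):

    #include <cstdint>
    #include <vector>

    // Toy RLE: encode a byte stream as (count, value) pairs, 1 <= count <= 255.
    std::vector<uint8_t> rle_encode(const std::vector<uint8_t>& in)
    {
        std::vector<uint8_t> out;
        for (size_t i = 0; i < in.size(); ) {
            uint8_t value = in[i];
            size_t run = 1;
            while (i + run < in.size() && in[i + run] == value && run < 255) ++run;
            out.push_back(static_cast<uint8_t>(run));
            out.push_back(value);
            i += run;
        }
        return out;
    }

    std::vector<uint8_t> rle_decode(const std::vector<uint8_t>& in)
    {
        std::vector<uint8_t> out;
        for (size_t i = 0; i + 1 < in.size(); i += 2)
            out.insert(out.end(), in[i], in[i + 1]);   // append 'count' copies of 'value'
        return out;
    }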