F***ing Learn to Code Again (1999) (kebby.org)
144 points by jeffreyrogers on March 23, 2015 | 64 comments



This attitude is fairly common amongst the real old-timers of the demoscene. While there is of course a bit of viewing the past through rose-tinted glasses, I think it's definitely grounded in truth.

The reason is that demo programming is the coding equivalent of watchmaking: extremely precise, technical work that takes deep knowledge and long hours to achieve something notable. Knowing your tools and platform inside out is a necessity - and the more modern your CPU/GPU are, the less realistic that is. When I was writing assembly for my 8 MHz Z80, I could easily hold the full instruction set, CPU register layout, etc. in working memory. When you're working on a modern i7 + GTX 780, that's pretty much impossible, and you have to resort to abstraction layers and tools that inevitably lead to the situation described by the author of the article. It's a classic case of extreme constraints working in favor of creativity.

Interestingly enough, my few friends who are still into coding demos don't do it on modern architectures - they're mostly sticking to the C64.


Sorry to ask, but what do you and the author mean by 'demos'?


I'll quote Wikipedia here:

    The demoscene is an international computer art subculture that specializes
    in producing demos: small, self-contained computer programs that produce
    audio-visual presentations. The main goal of a demo is to show off
    programming, artistic, and musical skills.
Basically, these programs are extremely small, but produce interesting and impressive visual or audio experiences. A popular example would be "fr-041: debris." by Farbrausch: https://www.youtube.com/watch?v=mxfmxi-boyo



If you want to watch some of the demos you can check out http://www.demoscene.tv/


Please don't. demoscene.tv is extremely outdated. YouTube is full of excellent-quality 60fps video captures of demos. Here's a link to get started: https://www.youtube.com/results?search_query=60fps+demoscene


I found that site hard to navigate, and a bit outdated.

For anyone who wants a very specific example of how cool demos can be, here are some of my favorites:

https://www.youtube.com/watch?v=wXMs54NUBOI

https://www.youtube.com/watch?v=xOYpKnl1B_g

https://www.youtube.com/watch?v=OO6dtjxbHzs

Going to see these demo competitions in Norway in my childhood was certainly a big inspiration to learn coding.


On a side note: the rise of the 1k, 4k and 8k sub-"scenes" is likely a response - creating artificial constraints to enable creativity.


I think the JS k-demos are pretty interesting, to say the least... it's amazing what can be done with a few KB, though the original vs. final code doesn't always bear much resemblance.

The point is, too many people don't spend *any* time thinking about whether what they write is efficient or not... Even mobile devices are becoming powerhouses. It's a bit of a shame, really.


I found this to be a great reminder of how far we've come in 16 years, but it's also a reminder of how little some of the basics have changed.

We still advocate dropping down to a lower-level language for the performance-critical parts, but we can more or less forget about having to write ASM for all but the most complicated algorithms.

We still fight platform differences, and DirectX is still one of the most powerful and developer-friendly frameworks.

Low-level knowledge is still mocked by some individuals, yet C++ is still the god of graphics and video games.


In those 16 years, we got new (and mostly unnecessary) layers of abstraction on top of what existed previously. The whole single-page-application crowd (e.g. Sencha, React...) is busy reinventing things that already exist three layers of abstraction down from the level at which they code.

DirectX was interesting in that era because of exactly the thing in the article: you could allocate a sound buffer and one surface, view those two COM objects as hardware, and not care much about the performance implications of the abstraction in between. Once you start drawing to (to borrow X11 terms, as I'm not too familiar with GDI/DirectX) an IndexColor surface that is shown on a TrueColor display, you get slowdowns in the abstraction layer on the order of "a high-end 2005 Windows NT workstation renders at 2 FPS what a random office computer from '94 on DOS rendered at 60 FPS" (see OpenTTD for an example of exactly this performance regression).
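
To make that hidden cost concrete, here's a rough C++ sketch (not actual GDI or DirectX code - the function and names are made up) of the conversion pass such an abstraction layer ends up running every frame when an indexed surface has to be shown on a true-color display:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Every frame, each 8-bit pixel index has to be pushed through the palette
    // to produce a 32-bit pixel - extra memory traffic and work that the old
    // DOS/VGA path avoided, because there the palette lookup happened in the
    // display hardware itself.
    void blit_indexed_to_truecolor(const std::vector<std::uint8_t>& src,  // 8-bit indexed framebuffer
                                   const std::uint32_t* palette,          // 256-entry palette, XRGB8888
                                   std::vector<std::uint32_t>& dst)       // true-color backbuffer
    {
        dst.resize(src.size());
        for (std::size_t i = 0; i < src.size(); ++i)
            dst[i] = palette[src[i]];
    }

On a 1024x768 screen that's roughly 786,000 palette lookups per frame before the application has drawn anything of its own.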

For the last 25 years, hardware acceleration for graphics has been readily available at all performance tiers, but you still need a fairly low-level understanding of how the hardware works to get reasonable performance (and as for working around hardware and driver bugs, we are currently in mostly the same situation as 25 years ago, and it was better in the meantime).


>> The whole single-page-application crowd (e.g. Sencha, React...) is busy reinventing things that already exist three layers of abstraction down from the level at which they code.

Except, you know, that part about running on the open web. The web is a deployment platform. Developers want to write the least amount of code to be able to support the most users possible. That was supposed to be Java, but it didn't work out that way. Deployment is still a platform-specific issue in Java, because users--for whatever reason--hate WebStart and its ilk. Add to that the fact that smartphones just plain don't support Java in that way and suddenly the browser makes for the only pragmatic cross-platform toolkit for building applications.

Yeah, JS sucks. Yeah, DOM sucks. Objectively, they are terrible. But for as bad as they are, there aren't any good alternatives. And no, building from source is not an alternative.


Building from source is happening anyway. The browser is pretty much a compiler for JS logic and HTML+CSS views. It just JITs it when the page loads instead of creating a downloadable binary.

With Node/io.js and Go, servers have been build-from-source for a while.

Given the snarly mess of frameworks/tools/languages/abstractions used in web dev, it's not obvious you'd get worse performance or inferior code using JIT LLVM and some newly minted update of Visual Basic or Swift with component/object/view caching.


That's not what I meant by "building from source" and you know it.


Huh? I was just pointing out that the distance between building a binary from a mess of source files and building a web app from - er - a mess of source files is not that huge.

And the trend is for it to get less huge.

E.g. Bootstrap already uses the words "custom build" for a rebuild, and the JS world has Gulp and Grunt for "building".

There's already been serious talk about either throwing out the DOM or abstracting it with something closer to a native code model.

Honestly, I'll be surprised if this doesn't happen within the next few years.


And that has nothing to do with users using the software. My point was that you can make even C code "cross-platform", with a lot of effort, if you make the user build the program from source. But that is just about the worst possible experience for the user. Building cross-platform applications as browser SPAs is easier on both the developer and the user.


> Given the snarly mess of frameworks/tools/languages/abstractions used in web dev, it's not obvious you'd get worse performance or inferior code using JIT LLVM and some newly minted update of Visual Basic or Swift with component/object/view caching.

What is obvious, however, is that more users have a Web browser on whatever device they want to run your software on than have a virtual machine environment for any language besides JavaScript.


> In that 16 years, we got new (and mostly unnecessary) layers of abstraction on what existed previously.

I don't know about you, but I'd much rather write a simple web service application in python than in assembly.


That's a totally different domain, where abstractions are most likely for the better. The bad part is abstraction in places where it's not wanted, e.g. graphics and other high-performance systems programming. Abstractions are fine if you can opt out, but don't take away control from those who need it.


The same fundamental equation applies to both domains: Software engineering hours are expensive and hardware is cheap.

I just ordered a video card with 24 GB of RAM. Well, calling it a video card is not quite right, as it's an Nvidia K80 HPC card, but as a software engineer working on a team of five computer vision/R&D guys and two software engineers, I can tell you that I do not want to live in a world where I have to fix code written by researchers that's juggling VRAM and managing 4,992 threads. I'd much rather have that abstracted away behind a clean API that hides the complexity.

And in terms of performance, it does not take many man-days of debugging some horrendous pile of code - brought on by the complexity of exposing every tweakable bit of an architecture - to exceed the cost of X additional $5,000 HPC cards running less performant but far less complex code.


Is it not the case that many/most games now are some smaller C++ components being scripted by something higher-level?


That was the case in the second half of the '90s and the '00s; today most engines have gone back to C++ plugins (or plugins in the same language as the engine itself) plus simple declarative logic in the game data.

(This has to do both with performance and with various walled gardens that do not allow general-purpose scripting.)


The Apple App Store has allowed scripts for this exact scenario for some time. I'm not sure which other walled gardens you mean, though.


[deleted]


Uh, yes they can, and have been able to for years. What they can't do is download and interpret code.


Why would that be a reason to prefer to implement your entire game in native code?


Needs a (1999) in the title.

For those not in the demoscene, this is written by one of the members of the demogroup Farbrausch:

http://en.wikipedia.org/wiki/Farbrausch

I think this is one of their most impressive demos:

http://theproduct.de/


Let's not forget .kkrieger - their 96k procedural FPS game [1].

Also, ryg from Farbrausch (who works for RAD Game Tools) blogs regularly [2]. A story about their crazy .kkrieger code sprints is at [3] (submitted about a year ago [4]).

[1] http://www.pouet.net/prod.php?which=12036

[2] https://fgiesen.wordpress.com/

[3] https://fgiesen.wordpress.com/2012/04/08/metaprogramming-for...

[4] https://news.ycombinator.com/item?id=7739599


My favorite bits from link 3:

  In the menu at the start, cursor-down works,
  but cursor-up doesn’t (he never hit cursor-up
  in menus during the test run).

  The small enemies at the start can hit you, but
  he didn’t get hit by any enemy shots, so in the
  released version of .kkrieger enemy shots deal
  no damage.


96k. Think about that for a sec. Most big-name PC games are ~10 GB these days... 100,000x as big as .kkrieger.


Funny how they apparently did go with reusability in the end ;).


Farbrausch made the first demo I ever saw, a 2007 one called debris (fr-041): https://www.youtube.com/watch?v=wqu_IpkOYBg

Super cool, and in an executable of 177kB.


A newer one from them I really liked is Rove:

https://www.youtube.com/watch?v=k_oTQd93eRI


.the .product was amazing at the time, but I feel like the scene has really moved beyond that old demo.


Thanks, I fixed the title.


Hilariously, following the links lands you at http://www.theprodukkt.com/, which is the site for a German butcher shop. Avoid the link unless you like giant images of thinly sliced meat.


For the record, that's not German, that's Russian. Moscow, to be specific:

> 111398, г. Москва, ул. Перовская, 20


> Avoid the link unless you like giant images of thinly sliced meat.

Found the vegan :)


I'm not vegan at all, in fact I'm quite the carnivore. The trail of links just leads somewhere unexpected if you're coming from a demoscene page.

Separately, my apologies about mistaking the language and location.


I was really deep into the scene many, many years ago, and on the PC, 2nd Reality by Future Crew blew my mind. And still does.

http://en.wikipedia.org/wiki/Second_Reality


Yup, if the "Mother of all Demos" moniker wasn't already (rightfully) taken, 2nd Reality would be it.


Crystal Dream 2 by Triton, released earlier the same year, is on par, I think:

http://en.wikipedia.org/wiki/Crystal_Dream_2

https://www.youtube.com/watch?v=7mWbnVPwX4U


Me too. Maybe it isn't precisely at the top technically, but artistically and musically it just hangs together so well and so accessibly.

A minor but important point in its favour is that it didn't require a Gravis Ultrasound; those things were annoyingly expensive.


My friends and I all upgraded when the ACE version came out, which was meant to be used more as a passthrough - it was great. http://en.wikipedia.org/wiki/Gravis_Ultrasound#UltraSound_AC...

A lot of friends who were devs got free Gravis boards by writing to Gravis directly, which is why so many demos and shareware supported it.


It also blew my mind. I could have watched it 100 times. I'm surprised I came across a copy of it so soon after it came out. It must have really made the rounds! Same as DOOM - they both showed up on floppies via an older, cool acquaintance of mine.


The Commodore 64 version is double-mind-blowing.


The big lesson from demos is that procedures are the ultimate form of data compression. Minutes of video in 64 KB seems to be off people's radar these days, which is somewhat unfortunate, because some problems are amenable to the approach.
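
As a toy illustration of the idea (purely my own sketch, not how any particular 64k intro actually works), here's a C++ function that "compresses" a megabyte of texture data into a few lines of code by generating it at runtime:

    #include <cmath>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // A 1024x1024 grayscale texture is ~1 MB as stored data, but the procedure
    // that produces it is a few dozen bytes of code plus its parameters.
    std::vector<std::uint8_t> make_texture(int w, int h)
    {
        std::vector<std::uint8_t> px(static_cast<std::size_t>(w) * h);
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                // cheap interference pattern; real intros layer noise, filters, etc.
                double v = std::sin(x * 0.07) + std::sin(y * 0.05) + std::sin((x + y) * 0.03);
                px[static_cast<std::size_t>(y) * w + x] =
                    static_cast<std::uint8_t>((v + 3.0) / 6.0 * 255.0);
            }
        return px;  // the "asset" exists only at runtime; nothing ships on disk
    }

The same principle scales up to geometry, animation, and music: ship the recipe, not the result.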


This is an old attitude. People thought the idea of writing an operating system in anything other than assembly was nuts at first, too, but eventually the optimization just isn't very meaningful.


most of that specific optimisation advice is very far out of date.

the principles remain good though, basically

* know your shit

* measure your work and make sure you really do know your shit

i do constantly cry and bleed inside over the tremendous waste of bountiful computing resources we have today. someone once tried to tell me an xbox 360 was a memory constrained environment... jesus wept.


> someone once tried to tell me an xbox 360 was a memory constrained environment... jesus wept.

Depends on what one is used to and what needs to be done. If you're used to PCs that have at least as much GPU RAM and bandwidth as an entire 360, then sure, a 360 is memory constrained.


I always assumed that "memory constrained" referred specifically to the giant texture assets that modern games use. 2048x2048 textures are - from what I understand - not uncommon in PC game dev. I assume this is the biggest constraint.


If we're talking about "which bottleneck is the biggest", memory size is traditionally the big one on consoles. There's a longstanding preference for a performance profile of small/fast memory, a mid-tier CPU, and top-end dedicated graphics because, as the saying goes, "graphics sell games": in most games the kind and quantity of assets in a scene are limited, and the design will allow for them to be carefully streamed in as necessary, so it's more important to allow processing headroom. This also accounts for a difference in design style between PC games and their console equivalents. PC stuff tends to incorporate deeper simulation aspects with more persistent data being tracked, because there's some extra room for that, while console games are forced to be "lean", with most of their memory dedicated to the assets while the stats and save data are relatively light.

Like everything else, this has changed as we've gotten closer and closer to photorealism and games that are glorified tech demos constitute less and less of the overall market. The current-generation consoles have substantially changed their profile to generalize and be more like PCs - the 360 had only 512 MB at a time when gaming PCs were going for 1-2 GB, while the Xbox One and PS4 are roughly in the same ballpark (5 GB and 8 GB) as current-spec gaming PCs.


You should have included the year in the title, as it wasn't obvious at first.


Found a video of the "perfect drug" demo mentioned, uploaded with notes by the author to YouTube: https://www.youtube.com/watch?v=_LHpaGSb3JQ


This mostly just makes me glad I was never part of the demo scene.


Yeah, if I had a nickel for every time I heard someone rant about how stupid everyone is...


The programming advice it provides is good[0], but I'm ultimately struck by how whiny and entitled this sounds. People respond in this manner when they feel like they have some sort of position they have personally gained through their own work that is now being threatened. The arguments often only hide an insecurity that the methods the "traditionalist" used to gain their position are dying out, that a new breed is forming, using new techniques that they don't understand, but that they inherently recognize have the potential to be universally (or near enough) superior, if given the time to be developed. Thus the need for the immediate, preemptive attack: to cut out the nascent ideology before it has a chance to establish roots.

I think the anti-intellectual aspersions cast towards computer science students are a good indicator of this. The techniques he holds dear are only possible because of computer science. Computer science is a field of applied mathematics that some people dream will one day be treated like a field of theoretical mathematics, to gain standing and respect in the world of mathematics that traditionally looks down upon the applied fields like CS, statistics, cryptography, etc., in much the same way that "aht" artists look down on illustrators. I think he writes off an entire field of study just because occasionally, specific people--caught in this cultural divide between applied and theoretical mathematics--write some shitty code.

Let's see him come up with IEEE floats on his own. Let's see him conceive of 2's complement signed integers on his own. Let's see him come up with an assembler on his own. Without prior exposure to the concept.[1] You don't get Dijkstra's pathfinding algorithm with an ivory-tower attitude that doesn't care about the practicalities of implementation.

It's not like people are or were getting paid to do this. People did it for fun. If you don't like something that people do for fun, just walk away. The reason these things happen, that we see "bad" examples, is not that people are getting worse at doing this stuff; it's that new people are finding it easier to get started. Angry diatribes like this are just attacks on new people when they need encouragement and guidance.

[0] Well, to a degree. His field is "clever hacking of x86 assembly". A similar "clever hacking of dynamic programming languages" exists that necessitates completely different techniques. I'm reminded of the 30-LOC spreadsheet in JavaScript that showed up about a year ago: https://news.ycombinator.com/item?id=6725387

[1] I'm reminded of a joke. Some years in the future, a man goes to God and says, "God, science has progressed so far that we don't need you anymore." God smirks a little and replies, "Oh really?" The man says, "Yeah, and to prove it, I challenge you to a duel. Anything you make, I will make, too." And God answers, "Okay, well, how about a man?" And the man replies, "No problem. We know chemistry and biology and genetics; we can synthesize everything we need to make a man." He bends down and starts to pick up some dirt and some other materials to start making amino acids when God interrupts him: "Hey now, go get your own dirt."


> The programming advice it provides is good[0]

> [0] Well, to a degree.

Well, parts of the high-level strokes are, anyway. Nowadays his inline-assembly example is likely to just interfere with the optimizer's own code analysis and do more harm than good.

The time spent suggesting trivial rewrites that optimizers are now capable of doing on their own would be better spent discussing how the CPU cache or branch prediction works, and its implications.

Any modern discussion of optimization worth its salt would mention profiling - entirely lacking here.
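
For a sense of what that kind of discussion would look like, here's a minimal C++ sketch (my own, not from the article) of a classic cache effect: the same summation over a large matrix in row-major vs. column-major order, with std::chrono timing as a crude stand-in for a real profiler:

    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main()
    {
        const int N = 4096;                                   // 4096 * 4096 floats = 64 MB
        std::vector<float> m(static_cast<std::size_t>(N) * N, 1.0f);

        auto time_sum = [&](bool row_major) {
            auto t0 = std::chrono::steady_clock::now();
            volatile float sum = 0.0f;                        // volatile: keep the loop from being optimized away
            for (int i = 0; i < N; ++i)
                for (int j = 0; j < N; ++j)
                    sum = sum + (row_major ? m[static_cast<std::size_t>(i) * N + j]    // sequential: few cache misses
                                           : m[static_cast<std::size_t>(j) * N + i]);  // strided: roughly a miss per element
            auto t1 = std::chrono::steady_clock::now();
            return std::chrono::duration<double, std::milli>(t1 - t0).count();
        };

        std::printf("row-major:    %.1f ms\n", time_sum(true));
        std::printf("column-major: %.1f ms\n", time_sum(false));
        return 0;
    }

The two loops do identical arithmetic; the gap you'll typically see on a modern machine comes entirely from memory access patterns - exactly the kind of thing a profiler points you at and peephole micro-optimizations don't.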


That's exactly right. And that was ultimately my point. The industry advanced past this person. I think the vitriol he displayed was because he--at least subconsciously--knew it would, rendering his then-current skill-set largely obsolete.

But that's the nature of our job. We have to constantly be learning new things. If we don't, we end up wasting time screaming at newbies in their sputtering jalopies for "doing things wrong" as they pass us by.


If we do learn new things constantly we risk burning out instead. Choose your poison :-)


That's not what causes burnout.


> Let's see him come up with IEEE floats on his own. Let's see him conceive of 2's complement signed integers on his own. Let's see him come up with an assembler on his own. Without prior exposure to the concept.[1] You don't get Dijkstra's pathfinding algorithm with an ivory-tower attitude that doesn't care about the practicalities of implementation.

At least as far as the assembler goes, well, that's something that anybody who is sick of hand-keying in machine code and patching up addresses will probably think of.

As for the rest--well, modern CS folks (and electrical engineers!) aren't coming up with 754 or 2's complement either.


> At least as far as the assembler goes, well, that's something that anybody who is sick of hand-keying in machine code and patching up addresses will probably think of.

And then reject it, because it's inefficient and hand-keying machine code is more efficient and more hardcore. Or at least, that seems to be the common attitude in the demoscene.


Oh come on, the author was like 19 when he wrote it. Of course it's a little entitled and ranty :)


FYI: from Feb 15 1999



