Letter to John Carmack (phoboslab.org)
149 points by austinhallock on Aug 3, 2012 | 133 comments



I am 100% with Carmack. Sorry, guy. JavaScript is crap. The reasons that JavaScript sucks have been hashed to death in the past. If you are already disagreeing with me, then anything I say here won't change your mind, because you've already heard the arguments before and built up a repertoire of responses. That's fine, whatever floats your boat. Lots of fun games have been, and will be, written in silly languages like JavaScript and ActionScript and whatever. People used to write games in assembly, and they were still fun. In the end, it's the game that matters.

But don't tell John Carmack, or any of the many other people who have been writing simulation engines and 3D rendering engines since around when you were born, to use your web browser scripting engine. Seriously. (Also before I was born.)

And unsecured access to the graphics stack is a terrible idea. Flash already randomly causes my GPU to crash.


Do you not see the rhetorical problem with what you're saying? To come in here with a controversial statement and preemptively write off all oppositional viewpoints is to act in exactly the way you deride JavaScript proponents for allegedly behaving.


Of course javascript is crap...

...however, this is also true of every other part of web programming: HTML and CSS are similarly a mixture of academic navel-gazing nonsense and ugly kludges designed by committee.

The fact is that HTML5 is still really only version 0.1 of where web programming should be, but this doesn't mean it isn't improving. The fact is that having portable code snippets that can execute within a website is extremely powerful, and right now javascript is essentially the only way to accomplish this.

You don't have to love javascript to love what it allows you to do.


The fact is that HTML5 is still really only version 0.1 of where web programming should be

That's one way of looking at it. Another point of view is that HTML already does far more than it needs to, to serve as a perfectly adequate hypertext format. It could be argued that it's a mistake to try to morph the web-browser into a crappy OS and / or a poor man's UI remoting technology, when we already have perfectly good operating systems and technologies for remoting out UIs or delivering applications to remote locations.

Perhaps instead of trying to make the browser a Swiss-Army-Do-Everything-Polymorphic-FrankensteinsMonster-OperatingSystem-XServer-ApplicationRuntime, we should just use the browser to browse hypertext, and hand off these other issues to purpose-specific software that was designed to do that.


we already have triple figures of good operating systems and technologies for remoting out UIs

We only have one web stack. It may differ a little at the bleeding edge, but at the core it's the same stack, deliverable to pretty much every user and device in the world.

(Disclaimer: I work on the aforementioned Firefox OS :)


> The reasons that JavaScript sucks have been hashed to death in the past. If you are already disagreeing with me, then anything I say here won't change your mind, because you've already heard the arguments before and built up a repertoire of responses.

You're implying I'm closed-minded because I have an opinion. I would actually like to hear your reasons for saying JS is crap/silly.


It seems to me like you missed the point of his letter to Carmack entirely. He's saying it doesn't have to be the greatest tool for the job; he's saying it's a good intro for would-be programmers and dabblers.

And don't give me that "when you were born" line. All it does is hinder fresh, new ideas and perspectives.

Signed,

A dinosaur who tries to refrain from using the "When I was your age..." line.


> He's saying it doesn't have to be the greatest tool for the job, he's saying it's a good intro for would-be programmers and dabblers.

Perhaps. But we should be able to do better. Pygame, for example -- the example I found[1] looks quite nice and easy to use:

[1]http://www.willmcgugan.com/blog/tech/2007/6/4/opengl-sample-...


Pygame can be a pain to install sometimes. Things that package dependencies into a single binary (like Love2D) are easier to set up, but often lack the out-of-the-box interactivity that JavaScript tools provide.


Pygame is effectively just SDL.

If you are going to depend on OpenGL anyway, Pyglet is even nicer for newbies.


Javascript is the new BASIC then? If so, it is better.


I've been "writing simulation engines and 3D rendering engines since around when you were born" and I love JavaScript.

Like most C/C++/ASM programmers I hated it at first, and I'm under no delusion that it's going to replace C/C++/ASM any time soon. But like many say, "the right tool for the job". By any measure Lua is complete shit, and yet Carmack made it popular by using it as the glue for most of his games.

JavaScript does many things extremely elegantly and the environment it runs in is fun, cross-platform and accessible. It's fun to make things in JavaScript the same way it was fun to make programs on my Atari 800 or Commodore 64. You type something, click "refresh" and it's up. No downloading a 2-4 gig dev environment. No finding 4 SDKs you need to install. No figuring out how to get your image libraries to link. No worrying about how to get the initial screen up.
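For example, here's roughly all it takes to get a sprite moving with the arrow keys (a from-memory sketch, untested; the canvas id and image path are made up):

    <canvas id="c" width="320" height="240"></canvas>
    <script>
    var ctx = document.getElementById('c').getContext('2d');
    var img = new Image();
    img.src = 'sprite.png';                 // any small image
    var x = 100, y = 100, keys = {};
    onkeydown = function (e) { keys[e.keyCode] = true; };
    onkeyup   = function (e) { keys[e.keyCode] = false; };
    setInterval(function () {               // the whole "engine"
      if (keys[37]) x -= 2;                 // left arrow
      if (keys[39]) x += 2;                 // right arrow
      if (keys[38]) y -= 2;                 // up arrow
      if (keys[40]) y += 2;                 // down arrow
      ctx.clearRect(0, 0, 320, 240);
      ctx.drawImage(img, x, y);
    }, 16);                                 // ~60 fps
    </script>

Save that as an .html file, hit refresh, and it's up.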

And even better, when it works you just send your friends a link. No installing. No worries if they are on OSX and you are on Windows or Linux.

Are you going to write Shadow of the Colossus in JavaScript? Probably not. But ICO? Yea, no problem. Any Zelda to date? Sure. 80-90% of all games ever shipped would run just fine in JavaScript at this point in time. Very few games actually need every ounce of perf.

> And unsecured access to the graphics stack is a terrible idea.

WebGL does not provide unsecured access to the graphics stack so I'm not sure what this BS remark was about.


> "By any measure Lua is complete shit and yet Carmack made it popular by using as the glue for most of his games."

This just isn't the case. In recent games id Software has moved away from using scripting languages at all. Later in the same talk, Carmack praises iOS for reversing the trend toward Java and bringing back native languages, saying that native code is the only way you can get predictable performance for things like smooth scrolling.


Great response. I assumed the unsecured thing is in reference to the security issues in WebGL, which are presumably unintended.


> By any measure Lua is complete shit ...

Reasons? Measures?


> And unsecured access to the graphics stack is a terrible idea.

No one gives you unsecured access to the graphics stack. WebGL shaders are parsed and modified for security (for example, inserting things like bounds checks).

(Of course, there are security risks with any new technology.)


That's the problem. We aren't even sure what attacks exist against the graphics stack, let alone how to secure it. There is a mountain of "who would ever call this maliciously?" code sitting under WebGL.

I have similar thoughts about allowing web sites to install fonts. Font rendering is hard, and font engines have not traditionally been exposed to attack. The idea that we have found all the bugs in them is absurd. There were security problems found in PNG many years after release, and that was something people knew needed to be secure from the beginning.


> That's the problem. We aren't even sure what attacks exist against the graphics stack

That could be an argument against new JavaScript JITs, too, or new video codecs. But we add those to browsers all the time, because they are important. So is being able to render 3D, I would argue.

Furthermore, of course we have an idea of the risks here, and the ways to prevent them. A huge amount of effort has gone into that, both in speccing WebGL and in implementing it, and in collaboration with GL driver vendors. And security experts have been poking at GL and WebGL for a while, just like they poke at JS engines and video codecs.


Yes, it is an argument against JITs. For a little while, OS builders were making progress on mitigation techniques, then the browsers all got together and decided it would be cool to allow attackers more control over the executable parts of the address space.


Without JITs, you greatly limit the languages you can run quickly (no JavaScript, no C#, no Java, no Lua, etc. etc.) - that's a lot to give up.

3D is also a lot to give up.

I understand if you happen to not care about speed or 3D, and that's fine, but most people do.


I do care about speed and 3D. But I don't think the web needs to be the delivery mechanism for all software. 900 years ago, they compiled Thunderbird once and everybody downloaded and used it. Now, every individual end user compiles gmail on a daily basis. The SaaS model has a lot of advantages. It has disadvantages too. I'm ok with only a subset of all potential programs being viable in a SaaS model.

In particular, I think allowing malicious people to run programs on your computer by default because, hey it's safe in the sandbox, is a terrible idea.


> I do care about speed and 3D. But I don't think the web needs to be the delivery mechanism for all software.

Fair enough, I agree but put the line in a different place.


I didn't know that. Thanks. But I'm not so much worried about bounds checks as I am other obscure forms of overflow or corruption in graphics drivers. They barely work for things that are built specifically for them.


Bounds checks were just an example.

Another example: When you call bufferData in WebGL, it doesn't just call glBufferData with those arguments. It carefully checks those arguments for validity, including against the current state (currently bound buffer, its size, etc.). This ensures that the GL driver only sees calls with valid, reasonable info. It does incur a performance hit, but as you can see from WebGL demos on the web, it isn't terrible.
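Purely as illustration -- these names are made up, not actual browser internals -- the shape of that validation is something like:

    // Hypothetical sketch of the checking a browser does before the real
    // glBufferData ever runs; recordError/boundBufferFor are invented names.
    function validatedBufferData(gl, target, data, usage) {
      if (target !== gl.ARRAY_BUFFER && target !== gl.ELEMENT_ARRAY_BUFFER)
        return gl.recordError(gl.INVALID_ENUM);        // unknown target
      var buffer = gl.boundBufferFor(target);
      if (!buffer)
        return gl.recordError(gl.INVALID_OPERATION);   // nothing bound
      if (usage !== gl.STATIC_DRAW && usage !== gl.STREAM_DRAW &&
          usage !== gl.DYNAMIC_DRAW)
        return gl.recordError(gl.INVALID_ENUM);
      buffer.size = data.byteLength;   // remembered so later draw calls can
                                       // be bounds-checked against it
      driver.glBufferData(target, data, usage);  // only now touch the driver
    }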

GL drivers in general are pretty complex. But the very small subset of GL that WebGL uses - both in terms of the commands used, and the content it passes in those commands as just mentioned - is much simpler to secure. Browser vendors have been working with GL driver makers on safety for a while now to improve things further.

Again, are there security risks? Sure, any new technology has them. JavaScript engines and video codecs have them, exploits for those are discovered all the time, but without those our browsers would be much less useful. The same is true for WebGL.


Thanks again, I had no idea that browsers performed that much analysis. It's been a while since I've read up on WebGL.


To be fair, some user-agents, such as the Blackberry PlayBook, ask for permission from the user to use WebGL when a site wants it.


That only transfers the decision to run it; it doesn't make it any safer. You can override anything, but that doesn't "clean" it or make it beneficial by default.


Yeah, no one is "telling" John Carmack et al. to do anything. Unless you consider making an argument to be ordering folks around, in which case may I suggest you listen to this guy John who recently talked about JavaScript...


Don't hate JavaScript. It's our generation's Apple II.


Since the Apple II was, all in all, just very basic, easily programmable hardware for home users, I say no.

Our generation's Apple II is the ARM or any ARM-based development board. It's ubiquitous and it's (at least compared to the x86 architecture) rather easy to program yet very powerful. Those are the reasons why the Apple II bred good programmers like almost no other machine in its day.

Comparing a high-level language with tons of abstractions to Applesoft BASIC or pure 6502 Assembler is rather appalling to me. The reason why people who coded for the Apple II and similar machines became such great programmers is because they got a fundamental understanding of the hardware from it, which is something that JavaScript will never give you. It's not just that tinkering with low level stuff opens up new ways to be creative with the technology, but it does in fact make you a better programmer on all levels.

Additionally, JavaScript is (in my opinion, obviously) an absolutely terrible language, which we're just stuck with because it happens to be the only one that's well supported by all the big browsers of today.


ARM is cool, but you cannot give a kid an ARM-based computer and expect them to program anything without setting up a programming environment first. Do you expect this kid to cross-compile on a PC? Or maybe they should set up gcc on the device?

Maybe the Raspberry Pi will change that, but for now the kid won't even know how to compile anything. The best bet is probably Python, which isn't much better in "closeness to metal" than JavaScript.


I'm a bit too young to know how well the Apple II was documented for programming. However, people like Jordan Mechner (creator of the original Prince of Persia) were spitting out 6502 assembler code at a rather young age, weren't they? And it's not like that's a trivial task. Mind you, there was no internet as we know it today with StackOverflow and other resources; in fact, the internet was still pretty much in its infancy.

I agree there's a lack of ways to develop directly on ARM, but that should change pretty soon with the Raspberry Pi, as you point out, or any other such platform. For the rest of us there are development boards which usually come with quite a bit of documentation on their usage. Anybody determined enough to learn that stuff will be capable of doing so, I think.

Python again is more like a magic wand that does most things for you. When I learned coding I constantly kept wondering how those calls actually worked and what they did. I just couldn't accept that, say, print just prints stuff to the screen. I was wondering how it did that. I finally gave up on questioning at the edge of the ALU.

The reason I tell this story is that I think a high-level language like Python will teach you a lot about how to structure problems into smaller parts and will improve your "algorithmic thinking", but you'll still be pretty clueless about what's actually happening behind the code.

And exactly that sort of hacking on a lower level is what comes to my mind when I think of computers like the Apple II, which were basically just glorified calculators by today's standards.


I had a C64. I "programmed" in BASIC. Which means I typed in programs from the manual (which was in German, and I couldn't understand it), and changed stuff to see what the effect would be. I never got into 6502 assembler (I had no books for it), but I learned that I wanted to be a programmer, and that was what was important. When I got a PC I tried QBasic, Turbo Pascal, Assembler, C, Delphi, and everything else. Distance from the metal was never an issue for me. The issue was the idea I had, and the end product I wanted to achieve. Hell, I wrote a small logic game in Turbo Basic that was one huge for loop (with a counter that went to 99999999 or sth like that, because I didn't know of loops without a counter) with 5 screens of nested if-then-elses inside. It was about guessing words letter by letter, and I didn't know about arrays, so I had 5 variables a1$, a2$, a3$, a4$ and a5$ for the letters.

It was horrible, but I didn't know it. And it worked. I showed it at school, and to my parents, and I was a wizard. The methodology and style conventions came later.

I think of 8-bit computers as gateway drugs that invited people to become programmers. Currently the closest things we have are javascript+canvas, or some scripting languages in the game modding scene.

Nobody[1] really writes assembler professionally today anyway. It is too hairy now, and we have better compilers. Don't push it on kids.

[1] by nobody I mean almost nobody, and yes, ARM assembler is better, but still, show me a big application written in it


Sadly, I'm not too young to know. The Apple II had tons of documentation, including its schematics, for christ's sake, and a built-in disassembler so you could inspect anything in the ROMs. It came with two versions of BASIC -- Woz wrote an integer-only BASIC, and a licensed Microsoft BASIC. Over the years I had one, I also wrote programs in Pascal, C (with a Z-80 card), Forth, and 6502 assembler. Plus, you could wire crap directly into the game port's A/D converters; it was our generation's Arduino. So yeah, it was a pretty great box for developers.

> There was no internet as we know it today

It was the age of BBSes. I spent thousands of hours trolling random BBSes for bits of knowledge. There also were Compuserve, the Source, and Delphi -- today you'd think of them as AOL-like walled gardens. (In the early days AOL was considered just a Compuserve clone with a GUI front end.)

> 6502 assembler... not like that's a trivial task

I wrote an Apple ][ game in 6502 assembler for Compute! Magazine when I was 15. (Yes, back then, software was distributed in magazines... people would type the code in...) I urge you to find an online Apple ][ simulator and try it. It's crazy and fun. But yes, it's like going from the big lego blocks to the tiny lego blocks. You say "Python will teach you a lot about how to structure problems into smaller parts" -- writing a simple game in 6502 assembler will teach you (force you) to structure your code into even smaller parts.

I personally like bigger lego blocks and the higher-level abstractions we have today, I think it makes me ridiculously more productive and now that's what I crave (productivity = more time spent with my kids) whereas when I was younger I loved digging into the internals just because they were there.

> Apple II... glorified calculators...

Awww hell. Now it's on.


> Awww hell. Now it's on

In a sense all computers are "just calculators". That statement wasn't meant to mock the Apple II in any way. After all, modern computers can't do "more" in terms of computability. I was mainly referring to how instruction sets were smaller back in the day. Looking at the instruction set of an x86 CPU really makes me kinda dizzy sometimes. The 6502 on the other hand has some 56 instructions (if my quick googling proved correct).

I do like the bigger lego blocks as well. They're great for doing any actual work. I'm also not trying to force assembler onto anyone. But as you point out, digging into the internals is a whole lot of fun and a very interesting journey through the world of computing. It kind of feels like discovering the very essence of it in a way. As a bonus, it teaches you a whole lot of things that might be useful one day.

Bottom line, even with today's powerful languages it's never wrong to dig a bit deeper, if just for experience and insights.


...a built-in disassembler so you could inspect anything in the ROMs.

Which you didn't need to do because it also came with commented ROM listings.

Utterly amazing how far Apple has come (or gone) in the intervening 30+ years...


Try Arduino. It's ridiculously easy to get a project going.


"Closeness to metal" isn't really the issue.


The problem with javascript is similar to the problem with BASIC.

The wise elders back then tried to warn us that if you learned to program with BASIC it would take years (if ever) to unlearn those bad habits, but we ignored them. The result is thousands upon thousands of programmers who produce nothing but spaghetti code.

Now we have an entire generation of programmers for whom statically typed languages, and compiled languages in general, are completely alien. There is a time and a place for dynamic typing (just as there is a time and place for GOTO), but starting with a language like JavaScript inculcates habits that will take years to undo.


I really thought this was where this comment was going:

>> The wise elders back then tried to warn us that if you learned to program with BASIC it would take years (if ever) to unlearn those bad habits, but we ignored them. The result is thousands upon thousands of programmers, and the creation of every interesting company from Google to id. That's not a bug, it's a feature. >>

weird...


Unfortunately there's a gap between popular dynamic scripting languages - PHP, Python, Ruby - and popular static compiled languages - C++, Java, C# - where some imperative, statically and strongly typed, garbage-collected and natively compiled language should exist. For a first language, I think it shouldn't force any particular paradigm on you, nor require you to understand things at the C level of abstraction.

Legacy VB almost fills this gap, but unfortunately the syntax is a little embarrassing. ActionScript is perhaps closer.


It is not. It is closer to our generation's HyperCard (but not nearly as cool).

The iPad COULD have been our generation's Apple II if it came bundled with an un-sandboxed Xcode iOS app and Terminal... dreams of an alternate universe.


And? He's not ordering anyone around, he's urging him to consider others may find it useful. There's a world of difference between that and a direct order. Sorry you can't see that!


The post has absolutely no mention of the quality of JavaScript. It is arguing that Web technologies are 'good enough' for an extremely large class of problems, and that they have massive advantages due to a lower barrier to entry and ubiquity.

You might want to argue those points, but this is a complete strawman.


He's not telling Carmack to use a web browser scripting engine. In fact he specifically says "Nobody pretends that the next AAA-title will be written in JavaScript."


I didn't watch the keynote so I don't know exactly what was said, but Carmack is in the business of trying to make AAA titles.

I wonder what context Carmack's comments were in.


> JavaScript is crap

You don't have to write JS though, just use something that compiles to it.

> And unsecured access to the graphics stack is a terrible idea. Flash already randomly causes my GPU to crash

Letting only safe operations through to the driver is the biggest part of a WebGL implementation's job. Hopefully HTML 5, WebGL, WebRTC etc can replace Flash. I'd much rather trust those jobs to Chrome than to Flash (Flash provides accelerated 3d too these days btw.)

There's lots still to be done, and you can score bug bounties from Mozilla and Google if you manage to poke holes in Chrome/Firefox :)


I'm pretty sure the compiled graphics code is sandboxed, making an actual crash (theoretically) impossible.


With WebGL shaders? Rather unlikely.

What we can expect is hardened shader compilers in the graphics drivers. In 2018 or so.


Talented people like Carmack have the potential to make things less crap, if they took an interest. If they don't take an interest, things will remain the way they are for a lot longer. John Carmack should join the Chrome team.


One person, even Carmack, is no match for standards bodies like the W3C. Google itself is already trying to push different web technologies like Dart and WebM, but it's a very slow burn. I wouldn't say there is a lack of vision or new ideas.


>If you are already disagreeing with me, then anything I say here won't change your mind, because you've already heard the arguments before and built up a repertoire of responses.

Really? Ever stopped to think that it's you who is biased, so much so that you preemptively nullify any possible responses with the above cheap trick?


I don't have enough knowledge of native graphics libraries, nor of WebGL, to speak to the core issue in this article. But, as a young'n, the last line really strikes a chord with me.

I will never experience the joy people had fiddling around with their Apple IIs and I still don't know the first thing about the Commodore 64 besides the name. My attitude toward the "How will the kids ever learn unless they can tinker with it?" nostalgia that comes up on HN has always been: "Meh. No one I work with had an Apple II, and we all still ended up as tech-types."

That said, the first computer I owned myself ran Windows 98, and the first thing I learned to do with it was noodle around in HTML and JavaScript. Well, okay. It was the second thing I learned to do, after installing Sim City 2000 and wasting a week's worth of time.

It was very poignant to realize all at once how the internet provided a chance to be part of an easily-accessible ecosystem, one where you could tell a computer to do something and it just did it. Even if it was in a browser.


Buy an old computer from your local classifieds site/paper; it'll cost you $15 or something, including working disks, mags and books. And start coding on it (and hardware tinkering, if you like that); it's like a bonsai tree: it makes a brilliant and relaxing hobby. If you do this a few hours a week, you'll get that serene feeling (at least I do :) of something which is very fulfilling without the stress of HAVING to do it (JavaScript is also many people's day job, and that brings mixed, stressed feelings).

In a year or so you will understand fully a) what we are on about b) how very cheap and simple things can be very fulfilling c) why 'older folks' sometimes sigh when yet another faster CPU is eaten 100% by Windows while nothing significant seems to have changed d) how your computer works internally, if you are interested down to the digital pulse and IC level (I can extend my MSX computers using 74-series logic ICs, which is, again, very fulfilling) e) how you can do stuff efficiently in very little memory on very slow computers.

That e) point is something which might not seem valuable, but it was, and it still is; if you know how to do things on these computers, you understand how computers work and why some assembly code is much slower than other code. Although computers are vastly more complex these days, you can extrapolate quite a bit at a high level, and it will be easier to read current tech documentation about hardware (and software) as well.

Anyway, I would recommend everyone at least try this, as I strongly believe it will help the current generation understand better. It's also better for your children (and children of others you might know) to let them tinker and learn this young instead of playing Xbox 360 games (that's my opinion, but I believe it sticks). Maybe their friends won't be as impressed with them trying to implement a ray caster on a 3.58 MHz machine, but when they get to 12 they can program the Next Great App and win science fairs while the others, well, can play Skyrim.


I see there is a lot of nostalgia in that post that I (and others like me) will probably never understand. But, I still don't see the point.

It's nice that we could buy an old machine and program something on it. But at the same time we could do it in JavaScript, on the web, with a very pretty interface. So why should we trade all the knowledge, the tools and the languages that we (well, people like you who tend to get all nostalgic over this topic) have built, and write something on an archaic system?

I never had such an old system, yet at the same time, I understand the constraints more or less, I know a bit of assembler and C, and I still question where this knowledge is really benefiting me. (Though I do find it very interesting!)

If I want to have a constrained environment, I could join a JavaScript 1k challenge and work with artificial constraints (at the same time I could still enjoy the modern tools, environments and even graphics).

Maybe I just really don't see the point.

And to your last point, I think a better approach would be to show our children either PyGame or even modding tools for modern games. I just don't think a young child would really be that interested in the archaic inner workings of a slow machine, but could be really interested in making mods. I'm not saying there are no such children around (I'm sure there are), I'm just questioning the approach here.


I don't think you see the point :)

The point is that there is, in essence, no point. That's the same as the point of working on a bonsai tree. It has no point except that you can come to inner/outer peace and beauty.

If you read carefully you see that I actually said 'a few hours per week', so there is no mention of 'trading all the knowledge'; you are not going to work on ancient stuff full-time.

For children you might be right; I just know what I was like and what the kids I hung out with were like; I grew up in the '80s and I spent significant parts of it disassembling, soldering, recreating and such, with OLD (50s-60s) radios. Because they are EASY to understand and master.

My issue with PyGame for children (versus, for instance, an Arduino kit, Raspberry Pi, XGameStation or, much cheaper and better documented, an ancient computer) is that I have seen many kids grow up like that (replace PyGame with VB or HTML) and they don't have A CLUE how a computer works. And when they try to learn that, it is hard to make the step from this, basically, black-box system to how it actually works. You got that, but many don't.

But yes, I'm biased; I just know quite a few people who followed me and are happy with it; I just summed up the stuff I/we get from that. I'm probably just crazy :) And I do know you can do this in JS too, but people just don't, because their computers are powerful enough to do it with a ton of fancy libs and tools. In my experience it ends up with people (and yes there are exceptions; you are probably one of them) just being lazy.


The point is that there is, in essence, no point. That's the same as the point of working on a Bonzai tree.

Well, if there is truthfully no point, you could replace working on an ancient computer with actually working on a bonsai tree, or a rock garden, etc.

But I would assume you meant more "There is no point, aside from gaining an appreciation for how machines worked in an older, more basic form" heh.


I think you can replace it with that (I used to do other stuff for relaxation and a change of focus, I just like this more now), but I think you'll get more out of an old computer than a rock garden :)


I think the point is that there's very real value in understanding how the machine works at a low level. While it might seem like arcana, and while you might think that you understand the constraints more or less and be aware of how the machine is programmed, the value comes from actually asking the hardware questions using those methods.

Unless you're very lucky, at some point in your career your high-level development is going to get constrained by some very low-level fundamentals. Knowing what's going on in the machine is going to be key to working your way through it.

Not to mention the fact that you'll have the ingrained mentality to always think about the performance and bottlenecks in your code and systems, even if you're highly unlikely to ever hit practical limits.

Knowing that stuff -- more importantly being an experienced practitioner -- just makes you a better programmer overall, and makes you more sympathetic to the hardware that has to execute your code. It's a dying skill and it's very far from being nostalgia when it can bite you in the real world very easily.

That's the point.


I don't fully understand the parent's point either, but I think there is value in learning and using different systems, even as a hobby. As you agree, programming a low-level system can't be done with the same tools you are used to, you won't have the same conveniences, and you might have to do things differently, even think differently about otherwise common problems. There seems to be plenty of learning in there.


The closest I've gotten my oldest son to be interested in programming was a few weeks ago, when I did the classic

    10 PRINT "Hello! ";
    20 GOTO 10
on the AppleWin emulator I installed on his laptop. It kept him interested for a few hours, but after that he wandered away to play Minecraft instead.

There's so much joy inside that "]" prompt but I'm not sure how to get him to see it.


That little program is what got me interested in programming as a kid.

I played games nonstop on my (parents') Apple II, and loved it. But seeing those two lines of code produce "Hello!" over and over on the screen blew my mind.

You see, I thought games were amazing because I could manipulate these little worlds, and I could do anything imaginable within their rulesets. But seeing "Hello!" scroll forever and ever made me realize that, with this coding thing, there were no rules. The fact that I could make this computer do whatever I wanted, if only I could speak its language, was irresistible.

So of course, I modified the program to say "Hello sray!" and "Hello <this>" and "Hello <that>" and "Sray's brother smells", and so on and so forth. And then I figured out how to add spaces to each line to make a cool zig-zag effect. And so on and so forth... most of us know how it goes.

Anyway, this little trip down nostalgia lane isn't going to help you get your son interested in coding. But it's interesting how different kids react to the same thing.


It was better back when you could totally freak your teacher out simply by running a simple program like that in class after you finished the assignment before anyone else.


Well, I was allowed to play 3rd party games 1 hour per day max; the rest of the time I was allowed behind my computer but not playing games. So what else to do with that great machine than make games myself? It worked well for me and still does.


You could always install DOSBOX and GW-BASIC. Then you can play around with the PLAY statement (feed it a string of notes), the SOUND statement (feed it frequencies) and the DRAW/LINE/PRESET/COLOR/PALETTE statements (draw coloured shapes on the screen).


I remember, at age 8 or 9, being so proud of myself for having developed an app that played a couple of complete songs using SOUND in Tandy BASIC on our first PC (an 8088 with a whopping huge 10MB hard drive).

My father (an amateur singer) used the "Wind Beneath My Wings" version to warm up before practice.


Me: PLAY "l16ecececececececececececececececececececececececec"

My dad: * runs out to answer the "telephone" *

Me: * trollface


When I was 10 I started reading a VB book and was inordinately proud of myself for learning how to customize the toolbars in the IDE. I just assumed that that was a major part of programming. I also misunderstood the part on autocompletion and thought it meant most of your code was autogenerated.

I wasn't the most tech-literate child, but at least I've come a decent way since then.


Damn I feel old. I was using GW-BASIC when I was 8 :-(


I learnt to program with GW-BASIC when I was 13 and, as you say, it had all these high-level built-in functions that made getting results easy.

Most of the stuff I programmed when I was 13 was crap, but it was amazing fun!

Today I would recommend Python to start tinkering, but to be honest I think it was easier back then. My PC had 512KB of RAM (it didn't matter, GW-BASIC allowed only 64KB for both your code and your data), a CGA graphics card (4 colors), an 8MHz CPU and no hard disk; and even with all these limitations, I think it was easier than Python + PyGame.


> We're there again: take 3 lines of JavaScript and you're drawing an image to a canvas element. Take 20 more lines and you have a rendering loop and a sprite that moves with the arrow keys.

Processing (Java), PyGame (Python), and I'm sure many other libraries can do the same thing. They're also faster, cleaner, more robust, and you're not writing some insane garbled blend of JavaScript and GLSL with all the associated scaffolding code between the two vastly differently typed languages.

I would highly recommend Processing for any graphics playing that you might do. Yes, you have to write a bit more code (it is Java, after all), but it's also much more rewarding and just as portable as something written for WebGL.


Those libraries/languages don't have the install base of JavaScript, which is on nearly every computing device released in the last 10 years, albeit most of those older models probably won't be able to run the JavaScript that's running now.

And there is no "click download" required like those two libraries would need, and no need for patch updates either. Your "download" is just pointing to a URL.


Yes, thank you. I know how browsers work.


So you just momentarily forgot it when writing your previous comment?


Not to knock Processing, but I don't see how it can be described as 'just as portable' when a WebGL-capable interpreter is already installed on millions of machines worldwide, with zero knowledge required to use it?


I find it amusing that you mention Pygame, because it's a wreck with horrible performance due to software rendering, it has bad tutorials, inadequate documentation, and last time I checked had 32-bit/64-bit issues.

Source: Personal experience porting a multiplayer 2D game

Edit: By contrast, JavaScript (well, CoffeeScript) and the DOM was a much nicer experience.


Not only that, but Pygame only runs on CPython, which is about 10X slower than the JavaScript engines in FF or Chrome.


With Web/Local Storage you are limited to 5-10 Megabytes of data.

There will be a shitstorm once graphic drivers get exploited through WebGL.

The game the writer self-promotes is in a completely different genre than what id Software does (a 2D shmup versus a 3D FPS with detailed graphics). It eats a whole CPU core for a seemingly trivial game (it might just be missing a sleep in the code, I have no clue).

If you use a good codebase, you do not need to write 200 lines just to setup your screen, eg http://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenG... .

And never forget that web-based games are kinda non-free unless you can download them and play them locally. You are always at the whim of the developer/hoster/provider. I much prefer games that run natively, disconnected from the internet (attack vector) and whenever I want.


The web platform is rapidly improving for games, on all fronts.

The HTML5 offline (appcache) spec allows you to have resources available when offline (and you don't need to write your own code to stuff things into localstorage...) Besides, localstorage is not considered, by browser hackers, to be the best idea performance-wise. They recommend IndexedDB.
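For instance, stashing a downloaded asset in IndexedDB looks roughly like this (a sketch from memory; the database/store/key names are arbitrary, and older browsers may need the moz/webkit prefixes):

    var req = indexedDB.open('game-assets', 1);
    req.onupgradeneeded = function () {
      req.result.createObjectStore('assets');    // simple key -> blob store
    };
    req.onsuccess = function () {
      var db  = req.result;
      var xhr = new XMLHttpRequest();
      xhr.open('GET', 'textures/wall.png', true);
      xhr.responseType = 'blob';
      xhr.onload = function () {
        db.transaction('assets', 'readwrite')
          .objectStore('assets')
          .put(xhr.response, 'textures/wall.png'); // cached for the next run
      };
      xhr.send();
    };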

Those specifics aside, my main point is to say that game makers are targeting the browser and browser vendors are very much aware of the pain points they have and are working to correct them. Browsers today evolve much faster than they did a few years ago.

(obdisclaimer: I work for Mozilla)


> With Web/Local Storage you are limited to 5-10 Megabytes of data.

IndexedDB has unlimited storage.


There is also a (proposed) file API. http://www.w3.org/TR/FileAPI/


You only need localStorage for saving games or preferences.

Static assets (like models, textures, most of the level) should be, well, static, and can be as huge as you like.

Also you can write an html5 game as a one-page website without a server part, and downloading it to play offline is as simple as File -> Save complete web page in your browser. You can provide an installer for your players that just unpacks the game to a directory and places a shortcut in the start menu (no DLLs, no registry settings), you can pack your game into the Chrome app format for some users and place your game in the Chrome app store.

One thing I think Chrome gets wrong is its security policy that by default prevents local web apps (when you open an html file on your disk in the browser directly) from doing ajax requests against the same directory. That means you need to wrap static assets in html, and that's stupid; I should be able to load xml from the directory my index.html is stored in. But whatever, there's a workaround.

I think most games don't really need super detailed 3d graphic, and simplicity of distribution (nothing to install, just click link and play) is the killer feature of html5 for simple casual games.


>We're there again: take 3 lines of JavaScript and you're drawing an image[...]

I think Carmack wasn't referring to how more abstractions could make his life easier. I've never had the time to dig deeply into id Software's sources (which get open sourced after a couple of years, in case someone didn't know), but what I've often heard him complain about were the abstractions themselves! This is basically about how we wrap a couple of layers around the drivers these days - drivers which have become increasingly complex.

See, abstractions are a good thing. General purpose abstractions might be a good thing for a lot of applications. For some, like graphics engines, they're not. Suppose you're working on a cutting edge 3D engine and you want to squeeze the last bit of performance from the machine. What you end up doing is talking to the graphics card as directly as possible, with the least amount of "layers of crap", as Carmack called them, inbetween you and the hardware.

General-purpose abstractions are loaded with stuff you don't necessarily need and come with quite a bit of overhead. Suppose you have 10 layers between you and the hardware, then everything you try to tell the hardware has to go through all those layers and possibly back to you. That's quite a drawback.

Since drivers have become rather complex themselves as GFX hardware has become more powerful in the last 2 decades you need quite a lot of code to talk to it properly.

The same thing goes for setting up the screen for rendering. Windows wants you to be quite verbose about what you're going to do (obviously). That's why you need 200 lines of code to do so.

The only way round that is using libraries that abstract those things away from you, but they do so at additional cost. Obviously it's possible to do the same thing in just one line of <insert your favourite language here> in the same way that it's possible to do the same thing in just a couple of lines of low level code as long as you have a library doing it for you.

However, as pointed out, in an AAA game engine, that's not what you want to do. There's a reason why the "highest common denominator" is usually D3D here. It's because D3D is still reasonably quick (although not as quick as OpenGL) and does enough things for you.


> It's because D3D is still reasonably quick (although not as quick as OpenGL)

While I believe that OpenGL is better for everyone if only because it's an open standard, I think that one article noting a difference between the two APIs should not lead to the conclusion that "OpenGL is faster than D3D, full stop".


Your reasoning is definitely correct. I haven't made enough measurements myself to justify the statement, but the couple of times I did, OpenGL was usually a tad quicker. However, that's more anecdotal evidence than anything else.


I totally agree with Carmack. Why the hell do crap technologies dominate all the time? HTML5 and JavaScript are the most disgusting thing I've experienced in my 20 years of game programming.

"Nobody pretends that the next AAA-title will be written in JavaScript. The community understands that it's magnitudes slower than native code"

Obviously not. Most web standards advocates insist that all software will and should be web-based. They must be criticized.


> Why the hell crap technologies dominate all the time?

It's because you're not looking at the bigger picture. The technology doesn't matter. It's all about the players.

Most people don't want to install something to "test out your new game" with. Even Steam is a higher barrier to entry than a web browser--and that's just a single download. Give players minimum resistance to play your game and you'll have more players.

Sharing is also huge. With browsers, it's trivial. Other systems, you need more context. More instructions. More work for the players who just want to share their experience. Or even better, for developers. "Hey, check out my new game: http://my.new/game. It's buggy, but what do you think about the controls so far?" Zero friction to sharing. Zero friction to playing.

Portability is also big. But I think in cases like JavaScript, that's more a consequence of not having to install anything.


Just like with C++. That language is a horrible monstrosity, yet projects keep migrating to it, e.g., id's game engines, GCC, etc.


Things can be horrible in many different ways. C++ is definitely horrible in some ways, but when used properly, it allows you to do stuff you cannot do with most 'less horrible' languages (if any). Going C++ is a very valid tradeoff for many purposes, the only reason people are looking down on it now is because the spec is a little messy, and because it is probably one of the most difficult programming languages to really master.

JavaScript on the other hand really is horrible in every way imaginable except ubiquity. Personally, I think it's a really bad thing people are trying so hard to point out 'the good parts' of JavaScript, because it devalues all those other great languages that share the same 'good parts' without the rest of the lunacy that is JavaScript. I'm thinking about languages such as Lua, Go, Erlang, Clojure, etc.

There really is only a single excuse for using JavaScript, which is when you are writing web applications, and only because it's (sadly) the only option you have if that's your game. JavaScript is like the QWERTY keyboard layout, back in the day it was designed it was ok for the purpose, and now we're stuck with it because everyone standardized on it.


Projects keep migrating to / being built in it because it offers things other languages don't (for example, being relatively close to the metal while still being object-oriented) and it's available.

On the other side you have web people who hate javascript or would like a solution more suited to their needs and can barely do anything about it since it's the only browser language. I find C++ just as messy as anyone else but its context is so different from javascript's that it doesn't even make sense to compare them.


op also completely misses the point. we're even further from the 4 lines than ever before.

sure john could just use someone's library. but in the terms john was speaking, javascript + library + browser stack etc. etc. is not 4 lines, it's hundreds of thousands.

there was only one prof at university i enjoyed. he would stand in the cpu design class and mock the java world and how things get slower despite us having faster cpus.

ps. i earn my money with web work

pps. javascript is a horrible language. holding my breath for dart and anything else that's not javascript


Yes, we're moving further and further away from the metal, but that's not the point Carmack was making when he said that getting started and pushing pixels to the screen was easier on the Apple II than it is on Windows/Linux. He didn't mean the actual machine code that is executed, but the code you have to write/understand.


And he's still correct. It's still simpler/faster when you are allowed to just poke bytes into memory-mapped video RAM (see http://digitalerr0r.wordpress.com/2011/04/30/commodore-64-pr... for an example on the C64).

The HTML scaffolding and JS for even a 2D canvas blit are still longer to type, and slow.
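Even the minimal "poke pixels" equivalent on a 2D canvas needs scaffolding; it's something like this (a sketch, assuming a <canvas id="c"> already on the page):

    var ctx   = document.getElementById('c').getContext('2d');
    var frame = ctx.createImageData(320, 200);
    for (var i = 0; i < frame.data.length; i += 4) {
      frame.data[i]     = 255;    // R - the closest JS gets to a "poke"
      frame.data[i + 1] = 0;      // G
      frame.data[i + 2] = 0;      // B
      frame.data[i + 3] = 255;    // A (opaque)
    }
    ctx.putImageData(frame, 0, 0);  // blit the whole frame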


That is in no way simpler, only faster. I have no experience with writing for the C64, but at a quick glance it makes little sense to me just skimming over it.

Now take a language like RoR, which I have never used. I can easily look at it and at least kind of see what is going on.

The abstractions we have now at least allow for code that can read like a book, and that is a lot less intimidating for someone trying to get started with programming.


"I have no experience with writing for C64" ^^^^^^^^ ???

it is simpler. it's not as intuitive, because you're putting it in relation to your current experiences. your current experiences make it easy for you to understand ror. but now imagine you wanna change something in the actual ror stack. how simple does that become? not that simple anymore, is it?

if you come from a different background and you look at functional programming languages you might say wow that's difficult. but similarly a guy that's only written functional code and looks at ror might say wow you're an idiot.

i hope you get the drift. it's just subjectivity you're talking about.


The quoted line was from going back, editing, and even screwing that up. haha

I can see what you're saying. I still feel like intuitive code == simpler, but we'll have to agree to disagree.

This has made me more interested in diving in and learning some lower-level stuff, since you believe it is simpler. At the very least it'd be a great learning experience.


A wonderful resource is Abrash's Graphics Programming Black Book ( http://www.gamedev.net/page/resources/_/technical/graphics-p... ); it gives you an appreciation for a lot of graphics and low-level issues (how to wrestle with VGA, etc.).

The problem with "intuitive code == simpler" is that it doesn't account for the scaffolding your intuitions are based upon. Like it or hate it, C is pretty straightforward in what it claims to do, and assembly moreso.

Remember that a machine is ultimately the world's stupidest idiot savant following your directions according to the rules of its hardware--any abstractions built atop it to make things "more intuitive" will only serve to shield you from the underlying simplicity of the system.

I'm not going to claim that high-level programming is anything other than a productivity boost, but I will state that for learning the system and writing system-friendly code you really need to be able to operate at a low level. Oftentimes, that low-level is very reasonable and as "intuitive" in its spartan domain as Ruby or something similar.

Unless you're the x86 instruction set. Fuck that guy.


Thank you for the link, I know what my next read will be!


Simpler/faster is relative and interpretative. I loved playing with machine code on the Commodore Amiga until I found out about assembler. Lacking in interpretation should not set the code reading skill entry level to some fashionable default. That is lazy and arbitrary, and just because someone likes the cozy space something like that creates, doesn't mean there isn't more out there. 90 is a nop. You build from that just as you would when reading about type reference. It is possible and rewarding. The other way is to abstract everything to the point you don't really know what you are doing while being fluent but oblivious to how confined you are.


If you have never used RoR, it's unlikely that you actually do see what is going on. But it's nice not to feel intimidated, I guess.


It would be great if people stopped seeing native and web games as mutually exclusive. I'm a huge fan of native games and all the oomph they can dish out. Of course we're not going to see AAA titles in the browser any time soon, and of course native games will ALWAYS have an advantage, at least performance-wise, over browser games, since they don't have the browser and JavaScript overheads.

That being said, there are both native and browser games that are great, just like there are some that are mediocre. The browser is imho a great place to prototype a game, make a first game as a newbie or even publish something that's not too resource hungry.

I'd even go as far as saying that WebGL will stimulate the development of native games as well. WebGL based game development is a lot closer to native game development in a way than Flash games are (at least from my limited experience) so new developers might make the switch more easily when their needs exceed what the browser can give them. Exactly the position John Carmack's company is in, although they never had the modern browser at their disposal in the past.

These are exciting times and I suffer a little bit inside whenever I see talented people arguing over them instead of making the best of them.


The arguments here will cause innovation.


Why does it matter whether Carmack likes JavaScript? If he doesn't like it, it won't vanish in a puff of smoke.


I guess it matters because Carmack's opinion carries weight in the industry.


Carmack is one of my heroes but I don't think that's particularly true. He's famous for not liking deep stories and throwing people straight into the game.. yet most AAA titles nowadays have ridiculously elaborate stories and it can take an age to get into gameplay. His opinion is always worth listening to, but I'm not convinced most people follow it.


it won't vanish in a puff of smoke. but i have a feeling it will turn into the new internet explorer 6 of the web at some point


As far as I can see, the only true parallel between the JavaScript/HTML/whatever environment and Apple II BASIC is that it comes pre-installed on every computer. This does NOT mean that it is a good way to teach anyone. It's not. It's pretty horrible, compared to pretty much everything else.

IMHO it's a much bigger revolution in terms of being able to teach your kid from across the globe over Skype (or whatever).


A quick aside for the people who say that Javascript stuff pegs their CPU: it's not always going to be that way.

We've got OpenGL now, and people are already writing shaders that do the work of physics and matrix rotations, etc., but OpenCL (with a C) is already popping up (you can get it running on node with a couple of different libraries, and there's a Ruby lib for it as well), which will let us write substantially more "general" code that runs on the GPU.

We were spoiled with traditional gaming; having unfettered access to the CPU, and all of the space we wanted when installing from physical media, is pretty crazy when you think about it.

I think the bigger question isn't "can Javascript be fast enough", because once the GPU is handling the physics and graphics, Javascript will be almost in the position of a traditional 3d engine's scripting language. It'll still have to do more than, say, UnrealScript. The networking code will be in JS, probably the model of the scene graph, etc. On the other hand, it's probably faster than UnrealScript; I know it's faster than TorqueScript.

Space requirements are the real killer, though. Current techniques rely on ever higher-resolution textures, 1024 pixels square or more, plus a displacement map (so that the textures appear three-dimensional), generally of equal resolution. These are just the character textures -- the environments the characters inhabit include literally gigabytes of resources that are streaming in and out of the GPU.

So almost any Modern Warfare game is never going to happen on the browser. We're talking gigabytes of content, as opposed to the megabytes we typically load even on content-heavy sites.

Nonetheless, a mixture of traditional and procedural techniques could get us a lot of the way there. Maybe use up a couple of MB on character textures, which contain difficult-to-generate details, and a few more on level geometry, but generate procedural textures and displacement maps for the environment.

I know it seems like the traditional wisdom is "never gonna happen", but that's only true so long as traditional games are "never gonna" get off the CPU and move most of their code onto the GPU (physics as well as drawing). Once the heavyweight tasks can be offloaded, a new and bewildering world opens up.


In 1998 you couldn't even stream a music video over the internet. In 2012 you can stream the entire game of Quake 2 through your browser in under 5 minutes. If you told me this while I was installing Quake 2 through my CD-ROM and waiting 30 minutes for it to finish, my head would have exploded.

http://playwebgl.com/games/quake-2-webgl/


We're definitely getting there. Still, with storage what it is right now, that 5 minutes becomes your startup time.

But a game built with this environment in mind would likely fare much better. I truly feel that we're just a couple of mobile browser upgrades from widespread accelerated browser 3d.


Well, let's not say "never" going to happen... I'd like to think that _someday_ gigabit internet will be the norm. :)


If you have watched Carmack's previous keynote, he strongly believes in static typing and static analysis, not only features and performance.


Anyone have a link to a video of Carmack's keynote?


Huh? I'm really not the one to pull in the generic argument but - there's a place for both and much more.

It's great not needing to write hundreds of lines of boilerplate code to get a 2D physics game prototype running, and conversely, it is nice to get awesome graphics support using the DirectX SDK, with much of it handled by a sane API.

Whatever floats your boat.


Language arguments aside, it's hard to look at the three.js 3D examples and not be impressed by how far browser 3D rendering has come: http://mrdoob.github.com/three.js/
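And the barrier to entry is tiny; a spinning cube is roughly this (from memory, so the exact names may differ between three.js versions):

    var scene    = new THREE.Scene();
    var camera   = new THREE.PerspectiveCamera(75, 4 / 3, 0.1, 1000);
    var renderer = new THREE.WebGLRenderer();
    renderer.setSize(640, 480);
    document.body.appendChild(renderer.domElement);

    var cube = new THREE.Mesh(new THREE.CubeGeometry(1, 1, 1),
                              new THREE.MeshNormalMaterial());
    scene.add(cube);
    camera.position.z = 3;

    (function animate() {
      requestAnimationFrame(animate);    // re-render every frame
      cube.rotation.x += 0.01;
      cube.rotation.y += 0.02;
      renderer.render(scene, camera);
    })();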


As an aside, I find that I usually disagree with anyone who feels compelled to frame their argument as an 'open letter'.

In this case, I agree that javascript is ubiquitous, and has the advantage of being delivered to the end-user in (more or less) source code form. However, the language itself is a mess, and it's a bit of a shame that we seem to have gotten stuck with it for so long as the only dynamic web runtime.

I think the most annoying thing about javascript is that people who learn it for web-programming seem to insist on using it everywhere, without regard for whether it's the best tool for the job.


An aside to anyone who does want to just push pixels with native code in 3 lines: http://code.google.com/p/pixeltoaster/

PixelToaster is a dead-easy-to-use framebuffer wrapper on top of GL/SDL. It also comes with a timer and keyboard/mouse input; it's pretty much as much fun as you can have in C++.


JavaScript is our generation's Apple II? Look I think Javascript is great but this is just ridiculous. But then I think it's also crazy to throw away all this great work on hardware and software and only innovate for the Web where we are re-solving problems that were solved long ago just so they are "Web based."


> Right-Click -> View Source is what made the web so successful and it's awesome that we now have this for 3D graphics as well.

That can't be serious. I bet he didn't try it on Gmail.


It's how I, and many other programmers I've worked with in my age range, got started.


Stage 3 of the linked X-fire game on my super-duper 64-bit laptop running the latest FF is just plain laggy. Just saying.


Dear John..


Javascript doesn't have a monopoly on "you can move a sprite in only a few lines of code". I can do that in C too, libraries aren't new. None of the horribleness of javascript is justified simply because you don't feel like using a higher level library in a sane language.


Hmmm... well, games these days come on a DVD with 2 gigs of graphics. How could a JavaScript game streaming over the web do something like that??? PC games with high-end graphics and gamers have pretty much driven PC sales/upgrades for the last decade. I really doubt that gaming enthusiasts are going to be happy with their FPS going into the toilet and loading times taking 5 minutes while all the textures download.

Are we going backwards in terms of technology?

Let's make a distinction here. JavaScript is fine for phone games and puzzle games. That is about it. Same reason why embedded chips are STILL programmed using 35-year-old assembly/C and always will be. It is the right tool for the job that wins.

Hype can be a very bad thing for technology. I realize that web developers are creating quite a clamor with their aspirations of becoming game developers (better money, more fun, and higher status among peers), but the real game devs are going to blow you and your toys away (you are using the wrong tool for the job). BASIC programmers did this in the 80s and we all know what happened to that, LOL.


The Web browser will eventually be an OS. It will provide a common language runtime for graphics and code execution.


No, Python is what John should be teaching his son, not fucking JavaScript.



Since when does swearing preclude civility?

EDIT: Rather, I thought a society like HN had mostly moved past such silly problems with words that hurt nobody.


Coarse language is (at least perceived to be) correlated with a lack of interest in reasonable discussion; if someone says "he should be teaching his son Python, not Javascript", you might be able to have a decent conversation with them about the relative merits of the languages. If someone says "... not fucking Javascript", that's not gonna happen.

Additionally, coarse language gets people's backs up -- people use more of it when they're... "being emotional", as it were, and it tends to bring out emotional (cf reasoned) responses in kind.

In other words, profanity is mutually causal with poor-quality discussion, by virtue of it being mutually causal with emotion overriding reason. I'd consider anything satisfying that property to be uncivil by definition, but that's semantics -- regardless, I conclude that coarse language is generally out of place on HN.

(Incidentally, I see dismissing attitudes you disagree with as "silly" and "to be moved past" in a similar light -- like profanity, it stands in lieu of reasoned argument, and the only effect it may have is to annoy those who hold the original sentiment.)


I don't see what swearing has to do with reasonable discourse - logic stands regardless of how it is communicated. Furthermore, there is always room for expression that is related to the topic. Are we limited to reasonable discourse? Are we not allowed to express how we feel about the topic at hand? I don't think that lack of reasonable discussion implies lack of civility. People can be passionate (even expressing this through profanity) without detracting from the discussion at all.

Nonetheless, I see your point when extrapolated to wider use. I also admit that I am biased in also hating Javascript; in a different context, I'm sure I'd be equally upset.


Actually, you make a good point, there's no reason profanity must "stand in lieu of reasoned argument". In fact, a persuasive case can be made all the more impactful and compelling with a bit of coarse language, tactically applied to convey emotional context -- consider some of the content on, say, Cracked, or the Oatmeal.

So, on further reflection, I guess what irked me more was that the original post contributed nothing to the conversation, rather than the profanity therein; it's just easier to notice things present than things absent.

I suppose there's a "quality hierarchy" of sorts in my mind (loosely corresponding to upmod/abstain/downmod):

  - well-argued comments, in which case manners are incidental
  - poorly-argued comments, but at least they're polite
  - neither well-argued nor polite


I agree with the guy's actual statement, but I agree with you that the coarse phrasing was unnecessary and unhelpful here.


Apologies. My statement should have read:

"No, Python is what John should be teaching his son, not the abysmal JavaScript."



