But don't tell John Carmack, or any of the many other people who have been writing simulation engines and 3D rendering engines since around when you were born, to use your web browser's scripting engine. Seriously. (Also since before I was born.)
And unsecured access to the graphics stack is a terrible idea. Flash already randomly causes my GPU to crash.
...however, this is also true of every other part of web programming: HTML and CSS are similarly a mixture of academic navel-gazing nonsense and ugly kludges designed by committee.
That's one way of looking at it. Another point of view is that HTML already does far more than it needs to in order to serve as a perfectly adequate hypertext format. It could be argued that it's a mistake to try to morph the web browser into a crappy OS and/or a poor man's UI remoting technology, when we already have perfectly good operating systems and technologies for remoting UIs or delivering applications to remote machines.
Perhaps instead of trying to make the browser a Swiss-Army-Do-Everything-Polymorphic-FrankensteinsMonster-OperatingSystem-XServer-ApplicationRuntime, we should just use the browser to browse hypertext, and hand off these other jobs to purpose-specific software that was designed to do them.
We only have one web stack; it may differ a little at the bleeding edge, but at the core it's the same stack, deliverable to pretty much every user and device in the world.
(Disclaimer: I work on the aforementioned Firefox OS :)
You're implying I'm closed-minded because I have an opinion. I would actually like to hear your reasons for saying JS is crap/silly.
And don't give me that "when you were born" line. It's built to hinder fresh, new ideas and perspectives.
A dinosaur who tries to refrain from using the "When I was your age..." line.
Perhaps. But we should be able to do better. Pygame, for example -- the example I found looks quite nice and easy to use:
If you are going to depend on OpenGL anyway, Pyglet is even nicer for newbies
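For a sense of scale, a minimal Pygame loop looks something like this (my own illustrative sketch, not the example linked above; Pyglet is comparably brief):

  # Open a window, bounce a square around, quit when the window closes.
  import pygame

  pygame.init()
  screen = pygame.display.set_mode((640, 480))
  clock = pygame.time.Clock()
  x, dx = 0, 4

  running = True
  while running:
      for event in pygame.event.get():
          if event.type == pygame.QUIT:
              running = False
      x += dx
      if x < 0 or x > 600:
          dx = -dx              # bounce off the edges
      screen.fill((0, 0, 0))
      pygame.draw.rect(screen, (255, 255, 255), (x, 220, 40, 40))
      pygame.display.flip()     # show the new frame
      clock.tick(60)            # cap at 60 fps

  pygame.quit()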
Like most C/C++/ASM programmers I hated it at first, and I'm under no delusion that it's going to replace C/C++/ASM any time soon. But like many say, "the right tool for the job". By any measure Lua is complete shit, and yet Carmack made it popular by using it as the glue for most of his games.
And even better, when it works you just send your friends a link. No installing. No worries if they are on OSX and you are on Windows or Linux.
> And unsecured access to the graphics stack is a terrible idea.
WebGL does not provide unsecured access to the graphics stack, so I'm not sure what this BS remark was about.
This just isn't the case. In recent games id Software has moved away from using scripting languages at all. Later in the same talk, Carmack praises iOS for reversing the trend against Java and bringing back native languages, saying that native code is the only way you can get predictable performance for things like smooth scrolling.
No one gives you unsecured access to the graphics stack. WebGL shaders are parsed and modified for security (for example, inserting things like bounds checks).
(Of course, there are security risks with any new technology.)
I have similar thoughts about allowing web sites to install fonts. Font rendering is hard, and font engines have not traditionally been exposed to attack. The idea that we have found all the bugs in them is absurd. There were security problems found in png many years after release, and that was something people knew needed to be secure from the beginning.
Furthermore, of course we have an idea of the risks here, and of the ways to prevent them. A huge amount of effort has gone into that, both in speccing WebGL and in implementing it, and in collaboration with GL driver vendors. And security experts have been poking at GL and WebGL for a while, just like they poke at JS engines and video codecs.
3D is also a lot to give up.
I understand if you happen to not care about speed or 3D, and that's fine, but most people do.
In particular, I think allowing malicious people to run programs on your computer by default because, hey, it's safe in the sandbox, is a terrible idea.
Fair enough. I agree, but I'd put the line in a different place.
Another example: when you call bufferData in WebGL, it doesn't just call glBufferData with those arguments. It carefully checks those arguments for validity, including against the current state (currently bound buffer, its size, etc.). This ensures that the GL driver only sees calls with valid, reasonable info. It does incur a performance hit, but as you can see from WebGL demos on the web, it isn't terrible.
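To give a flavor of what that checking looks like, here's a rough sketch in Python (purely illustrative pseudocode, not from any actual browser; only the GL enum values are real):

  # Hypothetical sketch of WebGL-style bufferData validation.
  GL_ARRAY_BUFFER = 0x8892
  GL_ELEMENT_ARRAY_BUFFER = 0x8893
  VALID_USAGES = {0x88E0, 0x88E4, 0x88E8}  # STREAM/STATIC/DYNAMIC_DRAW

  class Context:
      def __init__(self, driver):
          self.driver = driver   # the real GL entry points, trusted code only
          self.bound = {GL_ARRAY_BUFFER: None, GL_ELEMENT_ARRAY_BUFFER: None}
          self.error = None

      def buffer_data(self, target, data, usage):
          # Reject unknown enums before the driver ever sees them.
          if target not in self.bound or usage not in VALID_USAGES:
              self.error = "INVALID_ENUM"
              return
          # Check against current state: a buffer object must be bound.
          if self.bound[target] is None:
              self.error = "INVALID_OPERATION"
              return
          # Only a known-good call ever reaches the actual driver.
          self.driver.glBufferData(target, len(data), data, usage)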
GL drivers in general are pretty complex. But the very small subset of GL that WebGL uses - both in terms of the commands used and the content passed in those commands, as just mentioned - is much simpler to secure. Browser vendors have been working with GL driver makers on safety for a while now to improve things further.
Our generation's Apple II is the ARM or any ARM-based development board. It's ubiquitous and it's (at least compared to the x86 architecture) rather easy to program yet very powerful. Those are the reasons why the Apple II bred good programmers like almost no other machine in its day.
I agree there's a lack of ways to develop directly on ARM, but that should change pretty soon with the Raspberry Pi as you point out, or any other such platform. For the rest of us there are development boards which usually come with quite a bit of documentation on their usage. Anybody determined enough to learn that stuff will be capable of doing so, I think.
Python, again, is more like a magic wand that does most things for you. When I learned coding I constantly kept wondering how those calls actually worked and what they did. I just couldn't accept that, say, print just prints stuff to the screen. I was wondering how it did that. I finally gave up on questioning at the edge of the ALU.
The reason I tell this is that I think a high-level language like Python will teach you a lot about how to structure problems into smaller parts and will improve your "algorithmic thinking", but you'll still be pretty clueless about what's actually happening behind the code.
And exactly that sort of hacking on a lower level is what comes to my mind when I think of computers like the Apple II, which were basically just glorified calculators by today's standards.
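(Incidentally, even Python lets you peek one layer down when that curiosity strikes. A quick sketch using the standard dis module:)

  # Disassemble a trivial function to see the bytecode Python actually executes.
  import dis

  def greet():
      print("Hello!")

  dis.dis(greet)  # prints the LOAD/CALL instructions behind that one line

It's still a long way from the ALU, of course, but each layer is inspectable if you go looking.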
It was horrible, but I didn't know it. And it worked. I showed it at school, and to my parents, and I was a wizard. The methodology and style conventions came later.
Nobody really writes assembler professionally today anyway. It is too hairy now, and we have better compilers. Don't push it on kids.
By nobody I mean almost nobody, and yes, ARM assembler is better, but still, show me a big application written in it.
> There was no internet as we know it today
It was the age of BBSes. I spent thousands of hours trolling random BBSes for bits of knowledge. There also were CompuServe, The Source, and Delphi -- today you'd think of them as AOL-like walled gardens. (In the early days AOL was considered just a CompuServe clone with a GUI front end.)
> 6502 assembler... not like that's a trivial task
I wrote an Apple ][ game in 6502 assembler for Compute! Magazine when I was 15. (Yes, back then, software was distributed in magazines... people would type the code in...) I urge you to find an online Apple ][ simulator and try it. It's crazy and fun. But yes, it's like going from the big lego blocks to the tiny lego blocks. You say "Python will teach you a lot about how to structure problems into smaller parts" -- writing a simple game in 6502 assembler will teach you (force you) to structure your code into even smaller parts.
I personally like the bigger lego blocks and the higher-level abstractions we have today. I think they make me ridiculously more productive, and now that's what I crave (productivity = more time spent with my kids), whereas when I was younger I loved digging into the internals just because they were there.
> Apple II... glorified calculators...
Awww hell. Now it's on.
I do like the bigger lego blocks as well. They're great for doing any actual work. I'm also not trying to force assembler onto anyone. But as you point out, digging into the internals is a whole lot of fun and a very interesting journey through the world of computing. It kind of feels like discovering the very essence of it in a way. As a bonus point it teaches you a whole lot of things that might be useful one day.
Bottom line: even with today's powerful languages it's never wrong to dig a bit deeper, if only for the experience and insights.
Which you didn't need to do because it also came with commented ROM listings.
Utterly amazing how far Apple has come (or gone) in the intervening 30+ years...
The wise elders back then tried to warn us that if you learned to program with BASIC it would take years (if ever) to unlearn those bad habits, but we ignored them. The result is thousands upon thousands of programmers who produce nothing but spaghetti code.
Now we have an entire generation of programmers for whom statically typed languages, and compiled languages in general, are completely alien. There is a time and a place for dynamic typing (just as there is a time and place for GOTO), but to start with a language like JavaScript inculcates habits that will take years to undo.
The wise elders back then tried to warn us that if you learned to program with BASIC it would take years (if ever) to unlearn those bad habits, but we ignored them. The result is thousands upon thousands of programmers, and the creation of every interesting company from Google to id. That's not a bug, it's a feature.
Legacy VB almost fills this gap, but unfortunately the syntax is a little embarrassing. ActionScript is perhaps closer.
The iPad COULD have been our generation's Apple II if it came bundled with an un-sandboxed Xcode iOS app and Terminal... dreams of an alternate universe.
You might want to argue those points, but this is a complete strawman.
I wonder what context Carmack's comments were made in.
You don't have to write JS though, just use something that compiles to it.
> And unsecured access to the graphics stack is a terrible idea. Flash already randomly causes my GPU to crash
Letting only safe operations through to the driver is the biggest part of a WebGL implementation's job. Hopefully HTML5, WebGL, WebRTC etc. can replace Flash. I'd much rather trust those jobs to Chrome than to Flash. (Flash provides accelerated 3D too these days, btw.)
There's lots to be done still, and you can score bug bounties from Mozilla and Google if you manage to poke holes in Chrome/Firefox :)
What we can expect is hardened shader compilers in the graphics drivers. In 2018 or so.
Really? Ever stopped to think it's you who is biased, so much so that you pre-emptively nullify any possible responses with the above cheap trick?
I will never experience the joy people had fiddling around with their Apple IIs, and I still don't know the first thing about the Commodore 64 besides the name. My attitude toward the "How will the kids ever learn unless they can tinker with it?" nostalgia that comes up on HN has always been: "Meh. No one I work with had an Apple II, and we all still ended up as tech-types."
It was very poignant to realize all at once how the internet provided a chance to be part of an easily-accessible ecosystem, one where you could tell a computer to do something and it just did it. Even if it was in a browser.
In a year or so you will understand fully:
a) what we are on about
b) how very cheap and simple things can be very fulfilling
c) why 'older folks' sometimes sigh when yet another faster CPU is eaten 100% by Windows while nothing significant seems to have changed
d) how your computer works internally, down to the digital pulse and IC level, if you are interested (I can extend my MSX computers using 74-series logic ICs, which is, again, very fulfilling)
e) how you can do stuff efficiently in very little memory on very slow computers.
That point e) might not seem valuable, but it is, and it still is: if you know how to do things on these computers, you understand how computers work and why some assembly code is much slower than other code. Although computers are vastly more complex these days, you can extrapolate quite a bit at a high level, and it will be easier to read current tech documentation about hardware (and software) as well.
Anyway, I would recommend everyone to at least try this, as I strongly believe it will help the current generation understand computers better. It's also better for your children (and the children of others you might know) to let them tinker and learn this young instead of playing Xbox 360 games (that's my opinion, but I believe it sticks). Maybe their friends won't be as impressed with them trying to implement a ray caster on a 3.58 MHz machine, but when they get to 12 they can program the Next Great App and win science fairs while the others, well, can play Skyrim.
I never had such an old system, yet at the same time I understand the constraints more or less; I know a bit of assembler and C, and I still question where this knowledge is really benefiting me. (Though I do find it very interesting!)
Maybe I just really don't see the point.
And to your last point, I think a better approach would be to show our children either PyGame or even modding tools for modern games. I just don't think a young child would really be that interested in the archaic inner workings of a slow machine, but could be really interested in making mods. I'm not saying there are no such children around (I'm sure there are), I'm just questioning the approach here.
The point is that there is, in essence, no point. That's the same as the point of working on a bonsai tree. It has no point except that you can come to inner/outer peace and beauty.
If you read carefully you'll see that I actually said 'a few hours per week', so there is no mention of 'trade all the knowledge'; you are not going to work on ancient stuff full-time.
For children you might be right; I just know what I was like and what the kids I hung out with were like. I grew up in the '80s, and I spent significant parts of it disassembling, soldering, and recreating OLD ('50s-'60s) radios. Because they are EASY to understand and master.
My issue with PyGame for children (versus, for instance, an Arduino kit, Raspberry Pi or XGameStation or, much cheaper and better documented, an ancient computer) is that I have seen many kids growing up like that (replace PyGame with VB or HTML) and they don't have A CLUE how a computer works. And when they try to learn that, it is hard to make the step from this, basically, black-box system to how it actually works. You got that, but many don't.
But yes, I'm biased; I just know quite a few people who followed me and are happy with it, and I just summed up the stuff I/we get from that. I'm probably just crazy :) And I do know you can do this in JS too, but people just don't, because their computers are powerful enough to do it with a ton of fancy libs and tools. In my experience it ends up with people (and yes, there are exceptions; you are probably one of them) just being lazy.
Well, if there is truthfully no point, you could replace working on an ancient computer with actually working on a bonsai tree, or a rock garden, etc.
But I would assume you meant more "There is no point, aside from gaining an appreciation for how machines worked in an older, more basic form", heh.
Unless you're very lucky, at some point in your career your high-level development is going to get constrained by some very low-level fundamentals. Knowing what's going on in the machine is going to be key to working your way through it.
Not to mention the fact that you'll have the ingrained mentality to always think about the performance and bottlenecks in your code and systems, even if you're highly unlikely to ever hit practical limits.
Knowing that stuff -- more importantly being an experienced practitioner -- just makes you a better programmer overall, and makes you more sympathetic to the hardware that has to execute your code. It's a dying skill and it's very far from being nostalgia when it can bite you in the real world very easily.
That's the point.
10 PRINT "Hello! ";
20 GOTO 10
There's so much joy inside that "]" prompt but I'm not sure how to get him to see it.
I played games nonstop on my (parents') Apple II, and loved it. But seeing those two lines of code produce "Hello!" over and over on the screen blew my mind.
You see, I thought games were amazing because I could manipulate these little worlds, and I could do anything imaginable within their rulesets. But seeing "Hello!" scroll forever and ever made me realize that, with this coding thing, there were no rules. The fact that I could make this computer do whatever I wanted, if only I could speak its language, was irresistible.
So of course, I modified the program to say "Hello sray!" and "Hello <this>" and "Hello <that>" and "Sray's brother smells", and so on and so forth. And then I figured out how to add spaces to each line to make a cool zig-zag effect. And so on and so forth... most of us know how it goes.
Anyway, this little trip down nostalgia lane isn't going to help you get your son interested in coding. But it's interesting how different kids react to the same thing.
My father (an amateur singer) used the "Wind Beneath My Wings" version to warm up before practice.
My dad: * runs out to answer the "telephone" *
Me: * trollface *
I wasn't the most tech-literate child, but at least I've come a decent way since then.
Most of the stuff I programmed when I was 13 was crap, but it was amazing fun!
Today I would recommend Python to start tinkering, but to be honest I think it was easier back then. My PC had 512KB of RAM (it didn't matter; GW-BASIC allowed only 64KB for both your code and your data), a CGA graphics card (4 colors), an 8MHz CPU and no hard disk; and even with all these limitations, I think it was easier than Python + PyGame.
I would highly recommend Processing for any playing around with graphics you might do. Yes, you have to write a bit more code (it is Java, after all), but it's also much more rewarding and just as portable as something written for WebGL.
And there is no "click download" required like those two libraries would need, and also no need for patch updates. Your "download" is just pointing at a URL.
Source: Personal experience porting a multiplayer 2D game
There will be a shitstorm once graphics drivers get exploited through WebGL.
The game the writer self-promotes is a completely different genre than what id Software does (a 2D shmup versus a 3D FPS with detailed graphics). It eats a whole CPU core for a seemingly trivial game (it might just be missing a sleep in the code, I have no clue).
If you use a good codebase, you do not need to write 200 lines just to set up your screen, e.g. http://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenG... .
And never forget that web-based games are kinda non-free unless you can download them and play them locally. You are always at the whim of the developer/hoster/provider. I much prefer games that run natively, disconnected from the internet (attack vector) and whenever I want.
The HTML5 offline (appcache) spec allows you to have resources available when offline (and you don't need to write your own code to stuff things into localstorage...) Besides, localstorage is not considered, by browser hackers, to be the best idea performance-wise. They recommend IndexedDB.
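For what it's worth, the appcache side is tiny: an attribute on the html element plus a manifest file (a minimal illustrative sketch; the file names are made up):

  <!-- index.html -->
  <html manifest="game.appcache">

  CACHE MANIFEST
  # v1 -- change this comment to force clients to re-download
  index.html
  main.js
  assets/sprites.png

The manifest has to be served with the text/cache-manifest MIME type for browsers to pick it up.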
Those specifics aside, my main point is to say that game makers are targeting the browser and browser vendors are very much aware of the pain points they have and are working to correct them. Browsers today evolve much faster than they did a few years ago.
(obdisclaimer: I work for Mozilla)
IndexedDB has unlimited storage.
Static assets (like models, textures, most of the level) should be, well, static, and can be as huge as you like.
Also, you can write an HTML5 game as a one-page website without a server part, and downloading it to play offline is as simple as File -> Save Page As... in your browser. You can provide an installer for your players that just unpacks the game to a directory and places a shortcut in the Start menu (no DLLs, no registry settings), you can pack your game into the Chrome app format for some users, and you can place your game in the Chrome Web Store.
One thing I think Chrome gets wrong is its security policy that by default prevents local web apps (when you open an HTML file on your disk in the browser directly) from doing AJAX requests against the same directory. That means you need to wrap static assets in HTML, and that's stupid; I should be able to load XML from the directory my index.html is stored in. But whatever, there are workarounds (e.g. launching Chrome with --allow-file-access-from-files, or serving the directory from a tiny local web server).
I think most games don't really need super-detailed 3D graphics, and simplicity of distribution (nothing to install, just click a link and play) is the killer feature of HTML5 for simple casual games.
I think Carmack wasn't referring to how more abstractions could make his life easier. I've never had the time to dig deeply into id Software's sources (which get open sourced after a couple of years, in case someone didn't know), but what I've often heard him complain about were the abstractions themselves! This is basically about how we wrap a couple of layers around the drivers these days - drivers which have become increasingly complex.
See, abstractions are a good thing. General-purpose abstractions might be a good thing for a lot of applications. For some, like graphics engines, they're not. Suppose you're working on a cutting-edge 3D engine and you want to squeeze the last bit of performance from the machine. What you end up doing is talking to the graphics card as directly as possible, with the least amount of "layers of crap", as Carmack called them, in between you and the hardware.
General-purpose abstractions are loaded with stuff you don't necessarily need and come with quite a bit of overhead. Suppose you have 10 layers between you and the hardware, then everything you try to tell the hardware has to go through all those layers and possibly back to you. That's quite a drawback.
Since drivers have become rather complex themselves as GFX hardware has become more powerful over the last two decades, you need quite a lot of code to talk to them properly.
The same thing goes for setting up the screen for rendering. Windows wants you to be quite verbose about what you're going to do (obviously). That's why you need 200 lines of code to do so.
The only way around that is using libraries that abstract those things away from you, but they do so at additional cost. Obviously it's possible to do the same thing in just one line of <insert your favourite language here>, in the same way that it's possible to do it in just a couple of lines of low-level code, as long as you have a library doing it for you.
However, as pointed out, in an AAA game engine, that's not what you want to do. There's a reason why the "highest common denominator" is usually D3D here. It's because D3D is still reasonably quick (although not as quick as OpenGL) and does enough things for you.
While I believe that OpenGL is better for everyone if only because it's an open standard, I think that one article noting a difference between the two APIs should not lead to the conclusion that "OpenGL is faster than D3D, full stop".
Obviously no. But most web standards advocates insist that all software will and should be web based. They must be criticized.
It's because you're not looking at the bigger picture. The technology doesn't matter. It's all about the players.
Most people don't want to install something to "test out your new game" with. Even Steam is a higher barrier to entry than a web browser--and that's just a single download. Give players minimum resistance to play your game and you'll have more players.
Sharing is also huge. With browsers, it's trivial. Other systems, you need more context. More instructions. More work for the players who just want to share their experience. Or even better, for developers. "Hey, check out my new game: http://my.new/game. It's buggy, but what do you think about the controls so far?" Zero friction to sharing. Zero friction to playing.
There was only one prof at university I enjoyed. He would stand in the CPU design class and mock the Java world and how things get slower despite us having faster CPUs.
PS: I earn my money with web work.
The HTML scaffolding and JS for even a 2D canvas blit is still longer to type, and is slow.
Now take something like RoR, which I have never used. I can easily look at it and at least kind of see what is going on.
The abstractions we have now at least allow for code that can read like a book, and that is a lot less intimidating for someone trying to get started with programming.
It is simpler. It's not as intuitive, because you're putting it in relation to your current experiences. Your current experiences make it easy for you to understand RoR. But now imagine you want to change something in the actual RoR stack. How simple does that become? Not that simple anymore, is it?
If you come from a different background and you look at functional programming languages, you might say "wow, that's difficult". But similarly, a guy who's only written functional code and looks at RoR might say "wow, you're an idiot".
I hope you get the drift. It's just subjectivity you're talking about.
I can see what you're saying. I still feel like intuitive code == simpler, but we'll have to agree to disagree.
This has made me more interested in diving in and learning some lower-level stuff, since you believe it is simpler. At the very least it'd be a great learning experience.
The problem with "intuitive code == simpler" is that it doesn't account for the scaffolding your intuitions are based upon. Like it or hate it, C is pretty straightforward in what it claims to do, and assembly more so.
Remember that a machine is ultimately the world's stupidest idiot savant following your directions according to the rules of its hardware--any abstractions built atop it to make things "more intuitive" will only serve to shield you from the underlying simplicity of the system.
I'm not going to claim that high-level programming is anything other than a productivity boost, but I will state that for learning the system and writing system-friendly code you really need to be able to operate at a low level. Oftentimes, that low-level is very reasonable and as "intuitive" in its spartan domain as Ruby or something similar.
Unless you're the x86 instruction set. Fuck that guy.
That being said, there are both native and browser games that are great, just like there are some that are mediocre. The browser is imho a great place to prototype a game, make a first game as a newbie or even publish something that's not too resource hungry.
I'd even go as far as saying that WebGL will stimulate the development of native games as well. WebGL based game development is a lot closer to native game development in a way than Flash games are (at least from my limited experience) so new developers might make the switch more easily when their needs exceed what the browser can give them. Exactly the position John Carmack's company is in, although they never had the modern browser at their disposal in the past.
These are exciting times and I suffer a little bit inside whenever I see talented people arguing over them instead of making the best of them.
IMHO it's a much bigger revolution in terms of being able to teach your kid from across the globe over skype (or whatever).
We've got OpenGL now, and people are already writing shaders that do the work of physics and matrix rotations, etc, but OpenCL (with a C) is already popping up (you can get it running on node with a couple of different libraries, and there's a Ruby lib for it as well), which will let us write substantially more "general" code that runs on the gpu.
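For a taste of what "more general code on the GPU" looks like, here's a minimal vector-add via the pyopencl bindings (essentially the standard pyopencl demo; assumes pyopencl, numpy, and a working OpenCL driver are installed):

  import numpy as np
  import pyopencl as cl

  ctx = cl.create_some_context()           # pick an OpenCL device (GPU if available)
  queue = cl.CommandQueue(ctx)

  a = np.random.rand(50000).astype(np.float32)
  b = np.random.rand(50000).astype(np.float32)

  mf = cl.mem_flags
  a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
  b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
  out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

  # The kernel is plain C, not a shader: no triangles or textures in sight.
  prg = cl.Program(ctx, """
      __kernel void add(__global const float *a,
                        __global const float *b,
                        __global float *out) {
          int gid = get_global_id(0);
          out[gid] = a[gid] + b[gid];
      }
  """).build()

  prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

  result = np.empty_like(a)
  cl.enqueue_copy(queue, result, out_buf)  # copy the answer back to the host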
We were spoiled with traditional gaming; having unfettered access to the CPU, and all of the space we wanted when installing from physical media, is pretty crazy when you think about it.
Space requirements are the real killer, though. Current techniques rely on ever higher-resolution textures, 1024 pixels square or more (a single 1024x1024 RGBA texture is already 1024 x 1024 x 4 bytes = 4 MB uncompressed), plus a displacement map (so that the textures appear three-dimensional), generally of equal resolution. And those are just the character textures -- the environments they inhabit involve literally gigabytes of resources streaming in and out of the GPU.
So almost any Modern Warfare game is never going to happen on the browser. We're talking gigabytes of content, as opposed to the megabytes we typically load even on content-heavy sites.
Nonetheless, a mixture of traditional and procedural techniques could get us a lot of the way there. Maybe use up a couple of MB on character textures, which contain difficult-to-generate details, and a few more on level geometry, but generate procedural textures and displacement maps for the environment.
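To make "procedural" concrete, here's a toy texture generator in Python/numpy (an illustrative sketch with arbitrary parameters, nothing production-grade): a few lines of code standing in for megabytes of shipped pixels.

  # Build a 1024x1024 noise texture at runtime by summing random grids
  # at several scales (octaves), with coarser octaves weighted more heavily.
  import numpy as np

  def value_noise(size=1024, octaves=5, seed=0):
      rng = np.random.default_rng(seed)
      tex = np.zeros((size, size), dtype=np.float32)
      for o in range(octaves):
          cells = 2 ** (o + 2)                  # grid resolution this octave
          grid = rng.random((cells, cells)).astype(np.float32)
          # Blow the coarse grid up to full size (nearest-neighbour here;
          # a real implementation would interpolate smoothly).
          scale = size // cells
          tex += np.kron(grid, np.ones((scale, scale), np.float32)) / (2 ** o)
      return tex / tex.max()                    # normalise to [0, 1]

  texture = value_noise()   # ~4 MB of pixel data from a dozen lines of code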
I know it seems like the traditional wisdom is "never gonna happen", but that's only true so long as traditional games are "never gonna" get off the CPU and move most of their code onto the GPU (physics as well as drawing). Once the heavyweight tasks can be offloaded, a new and bewildering world opens up.
But a game built with this environment in mind would likely fare much better. I truly feel that we're just a couple of mobile browser upgrades from widespread accelerated browser 3d.
It's great not needing to write hundreds of lines of boilerplate code to get a 2D physics game prototype running, and conversely, it is nice to get awesome graphics support using the DirectX SDK, with much of it handled by a sane API.
Whatever floats your boat.
PixelToaster is a dead-easy-to-use framebuffer wrapper on top of GL/SDL. It also comes with a timer and keyboard/mouse input; it's pretty much as much fun as you can have in C++.
That can't be serious. I bet he didn't try it on GMail.
Are we going backwards in terms of technology?
Hype can be a very bad thing for technology. I realize that web developers are creating quite a clamor with their aspirations of becoming game developers (better money, more fun, and higher status among peers), but the real game devs are going to blow you and your toys away (you are using the wrong tool for the job). BASIC programmers did this in the '80s and we all know what happened to that, LOL.
EDIT: Rather, I thought a society like HN had mostly moved past such silly problems with words that hurt nobody.
Additionally, coarse language gets people's backs up -- people use more of it when they're... "being emotional", as it were, and it tends to bring out emotional (cf reasoned) responses in kind.
In other words, profanity is mutually causal with poor-quality discussion, by virtue of it being mutually causal with emotion overriding reason. I'd consider anything satisfying that property to be uncivil by definition, but that's semantics -- regardless, I conclude that coarse language is generally out of place on HN.
(Incidentally, I see dismissing attitudes you disagree with as "silly" and "to be moved past" in a similar light -- like profanity, it stands in lieu of reasoned argument, and the only effect it may have is to annoy those who hold the original sentiment.)
So, on further reflection, I guess what irked me more was that the original post contributed nothing to the conversation, rather than the profanity therein; it's just easier to notice things present than things absent.
I suppose there's a "quality hierarchy" of sorts in my mind (loosely corresponding to upmod/abstain/downmod):
- well-argued comments, in which case manners are incidental
- poorly-argued comments, but at least they're polite
- neither well-argued nor polite