I thought this story was cool:
"[...] They were using the software rasterizer on the iPhone. I patted myself on the back a bit for the fact that the combination of my updated mobile renderer, the intelligent level design / restricted movement, and the hi-res artwork made the software renderer almost visually indistinguishable from a hardware renderer, but I was very unhappy about the implementation.
I told EA that we were NOT going to ship that as the first Id Software product on the iPhone. Using the iPhone's hardware 3D acceleration was a requirement, and it should be easy -- when I did the second generation mobile renderer (written originally in java) it was layered on top of a class I named TinyGL that did the transform / clip / rasterize operations fairly close to OpenGL semantics, but in fixed point and with both horizontal and vertical rasterization options for perspective correction. The developers came back and said it would take two months and exceed their budget.
Rather than having a big confrontation over the issue, I told them to just send the project to me and I would do it myself. Cass Everitt had been doing some personal work on the iPhone, so he helped me get everything set up for local iPhone development here, which is a lot more tortuous than you would expect from an Apple product. As usual, my off the cuff estimate of "Two days!" was optimistic, but I did get it done in four, and the game is definitely more pleasant at 8x the frame rate.
And I had fun doing it." 
And JC got it done in 4 days....
This isn't 10x programmer anymore, this is 30x!!
He's a great developer and has always pushed boundaries. I look forward to his postmortem after this project is finished.
And yes, I'd love it if the compiler (or other static code analysis) could detect how pure various bits of code are, and give reports. For far too long, compiler authors have treated compilers as a big opaque box that end users (developers) submit code to, and the compiler hands out code as if from on high. Smart developers want to have a 2-way communication with their compiler, learning about all sorts of things -- functional purity, headers over-included, which functions it decided to inline or not (especially in LTCG), etc. It's not the 1960s anymore -- developers aren't bringing shoeboxes of punchcards of source code to submit for offline processing. Let's get closer to a coffee shop where we can talk in realtime.
In the immediate future, GHC is going to become more interactive by adding "type holes". Essentially, you can just leave out parts of your program and the compiler will tell you what type needs to go there. So instead of writing your program and checking if it typechecks, the type system can actually help you formulate the code in the first place!
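For a flavor of what that looks like (the function name here is illustrative, and the snippet is deliberately incomplete — compiling it is the point):

    -- Compiling this makes GHC report the type needed at the hole '_',
    -- roughly: Found hole '_' :: Int -> String
    describe :: [Int] -> [String]
    describe xs = map _ xs

So instead of guessing what to put there, you let the compiler tell you that the missing piece must be an Int -> String function.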
Further afield, a bunch of people at the lab I'm working at are working on interactive systems that use a solver running in the background to solve problems for the programmers. These can be used to do all sorts of things from finding bugs to actually generating new code. Being interactive lets the solver suggest things without being 100% certain--the programmer can always supply more information. This also makes the solvers easier to scale because if it's not terminating quickly, it can just ask for more guidance from the programmer.
I think the general trend towards more interactive development is pretty exciting.
(For non- or fledgling Haskellers, "undefined" has any type, so if you define a function that plugs into your code and make its return value "undefined", then you can look at the type signature of the function and learn what the compiler proved about the type of that function. Pretty handy!)
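A tiny sketch of why that works: "undefined" typechecks at any type and only blows up if it's actually forced (sortScores below is a hypothetical, not-yet-written function):

```haskell
-- 'undefined' inhabits every type, so this stub typechecks; asking
-- GHCi ':t sortScores' shows what type the compiler accepted here.
sortScores :: [Int] -> [Int]
sortScores = undefined

main :: IO ()
main = print (length [3, 1, 2])  -- runs fine: the stub is never forced
```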
However, it may be (if you'll pardon the expression) garbage collecting people that we don't know about, or cloning them in such a way that their multiple representations are indistinguishable.
This sounds like a game development reference that I'm missing. Can anyone explain?
This is a hugely successful pattern throughout a number of aspects of gaming, graphics being one of the most classic examples. Double-buffered graphics don't suffer as much from tearing and other display artifacts.
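The core of the pattern is tiny (a minimal sketch; Buffer and the names are just stand-ins, not a real graphics API):

```haskell
import Data.IORef

type Buffer = String  -- stand-in for a real frame buffer

-- Exchange front and back buffers; the display shows 'front' while
-- the renderer draws into 'back', so the swap is the only shared step.
swapBuffers :: (Buffer, Buffer) -> (Buffer, Buffer)
swapBuffers (front, back) = (back, front)

main :: IO ()
main = do
  buffers <- newIORef ("frame A", "frame B")  -- (front, back)
  modifyIORef' buffers swapBuffers            -- "present" at vsync
  (front, _) <- readIORef buffers
  putStrLn front                              -- prints "frame B"
```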
Not really, no. Immutability comes at the cost of performance compared to mutability. The gap is shrinking between the two, but it's still wide enough that using pure immutable structures for frame buffers, shaders and other graphical concepts is simply not an option to write games.
Haskell is interesting in the sense that it doesn't prevent you from using mutable structures (e.g. Lenses, Writer) but it encodes this information in the type system. I'm really curious to read the conclusions that Carmack will draw from his experience but I wouldn't be surprised to read that at the very low levels, mutable structures are just unavoidable for high performance games.
Also, mutable structures accessed by concurrent threads are a much less difficult problem than most people claim, and it's often much easier to reason about locks and semaphores than about immutable, lazily initialized structures.
> using pure immutable structures for frame buffers, shaders and other graphical concepts is simply not an option to write games.
Seeing as people have written games in Haskell, this is clearly not true.
> Haskell is interesting in the sense that it doesn't prevent you from using mutable structures (e.g. Lenses, Writer)
Lenses and Writer both only use immutable data. It is possible to use actual mutable data in the ST and IO monads.
> but it encodes this information in the type system.
This is true of IO, but not of ST. With ST, runST :: (forall s. ST s a) -> a hides the effects.
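For example, a function can use real mutation internally and still expose a pure type (a small sketch using only base's Control.Monad.ST and Data.STRef):

```haskell
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- Internally mutates an STRef; externally, runST hides the effect
-- behind a pure type, so callers can never observe the mutation.
sumST :: [Int] -> Int
sumST xs = runST $ do
  ref <- newSTRef 0
  mapM_ (\x -> modifySTRef' ref (+ x)) xs
  readSTRef ref

main :: IO ()
main = print (sumST [1 .. 10])  -- prints 55
```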
> it's often much easier to reason about locks and semaphores than about immutable lazily initialized structures.
I don't know what you mean by this. In terms of functional correctness, immutable data structures, lazy or otherwise, are much easier to reason about. If you are talking about resource usage, sure, it's a little harder to reason about lazy data structures than strict ones, but give me a space leak over a race condition to track down any day.
Not really. There's no question whatsoever that GHC can run a fine Wolf3D on a fraction of a modern hardware setup. You could do it in pure Python with no NumPy. There are tools to help with the laziness stuff, and a 3D rendering loop will fit them perfectly.
But the performance limits of immutable structures for simulation and graphics are certainly interesting to me.
Not really; 3D rendering in Haskell via OpenGL is not new or interesting at this point. Frag is 8 years old, for example:
You can even stream textures asynchronously using PBOs (pixel buffer objects), and use dual PBOs like double buffers (or using copy-on-write techniques to only re-upload dirty rectangles...)
[edit: If I'm wrong... ouch. But it's been a while.]
Games often want physics/model threads run with a consistent timestep, but have the rendering thread run as fast as possible.
But it's nice to hear; hopefully John Carmack and id will make at least one more great game in the future.
That doesn't make Carmack any less of a great developer in how he pushes the limits of current hardware.
His games have been C++ for quite some time now.
Another factor is that the C code that comes out of id Software is just damn good. Go ahead and read the Quake 3 Arena source code: it's one of the better reads out there, as far as C is concerned. The Doom 3 source code is C++, but it's kind of weird C++ and I would be wary of learning C++ from it. Carmack has spoken about how his C style is just so much more mature than his C++ style, and this is exacerbated in these examples because Q3A is the last game in C, but Doom 3 is the first game in C++.
Yes, he's an icon in the C world.
This is what I mean by quite a while: id Tech 4 was released in 2004!
And quite insulting.
I'm not sure if my thought is obvious or insightful, but I like it. I try to write all my new general/reusable code as pure functions whenever possible (which is almost always).
I always wondered about color choices for some games' health packs.
In the article linked from his tweet, Carmack describes how he ported Wolfenstein3D to the iPhone. He apparently didn't start off with their own original code base, but used the open source project "Wolf3D Redux" (http://wolf3dredux.sourceforge.net) as a starting point.
This was possible because id open sourced their original game, and "Wolf3D Redux" is distributed under the GNU GPL v2.
Carmack also states: "I think the App Store is an extremely important model for the software business."
Therefore my question: is it possible to publish GPL'd games in the App Store? I seem to remember that this was not possible, since the ToS of the App Store impose further constraints, which is forbidden under the terms of the GPL.
I have verified that first hand by asking Apple to remove a derivative of my GPL-licensed software that they were pirating, and Apple complied.
In the case of Wolf3D, my guess is that id Software got a commercial license from the author(s) of that software (the copyright holder of GPL software can choose to also license it under another license).
Stockfish Chess is the one GPL v3 licensed app that I know of which has been available for... quite some time now. I've seen several other chess apps built on that engine, going as far back as when I was just starting iOS development.
Now, VLC was the one case where one volunteer for the project invoked the GPL to get Apple to take an iOS port of the app off the store. Based on that situation, it seems that if the original authors of a GPL licensed codebase want to pursue a claim against an app that uses that code, they can, and Apple will take it seriously.
I don't believe that situation has changed.
EDIT: For your own sake, it's probably best if you approach the original author to see if they can make an exemption, in writing, for the DRM situation. IANAL, but that seems like it would be the cleanest way to go about handling GPL licensed code without issue.
 - http://stockfishchess.org
 - http://www.tuaw.com/2011/01/08/vlc-app-removed-from-app-stor...
However, as far as I know you're right. If there's no clear authority as to who retains ownership and licensing rights to the code, and the contributions made to it, it's going to get messy.
A lot of projects, notably including Linux, intentionally avoid copyright assignment to make it impossible for anyone to relicense the codebase. Making sure that there are thousands of copyright holders from hundreds of jurisdictions, many of whom are not easily reachable or even knowable, all bound by common license terms, protects the project from situations where some participants would do something the rest haven't agreed to, either willingly or because they were forced to (e.g. through bankruptcy).
LWN.net has covered both sides of this subject quite well in recent months, with the tedious process of relicensing VLC and the GnuTLS copyright assignment controversy.
The politics of OSS make for some excellent reading.
 - https://lwn.net/Articles/525718/
 - https://lwn.net/Articles/529522/
That only works in the US, there are various jurisdictions where it is impossible to allow third parties to relicense one’s IP in any way they want.
I admit that I'm not familiar with international copyright law. I do know that several GNU projects, and other high profile open source projects, have a policy in place wherein contributors have to agree to a copyright assignment agreement before they can contribute their modifications.
If that's not possible in some regions, they clearly still have some means of continuing to uphold their own IP. I suspect it's not as harsh as completely avoiding contributions from some regions of the world. Unless I'm mistaken.
However, similar agreements (i.e. an artist allowing a record company to use his song however they wanted) signed in the 70s have been found to not extend to ‘digital’ uses, of e.g. music, by German courts as this use case did not exist yet at the time the license was granted.
I’m really saying that this is an annoyingly complicated matter and best avoided by not requiring relicensing – and I’m not a lawyer, of course (and also too lazy to find references now).
And, naturally, there is the issue whether you trust the FSF enough to do the right thing.
The trend seems to be towards difficult 3D controls, but you cannot write off the entirety of 3D games because you haven't seen good samples.
I also understand that I'll probably get downvoted into oblivion, as every single noscript user is probably on this site. But it's true...
Twitter is. Twitter is an absolute clusterfuck technology wise, they are the prime example of doing everything wrong. Defending their idiocy does not reflect highly on you.
I personally block Flash by default (mostly to stop audio ads), and I have no problem turning it back on when a site requires it.
I don't know what's happening to web development, but since this app fever started, developers have been fighting for eye candy and LESS accessibility. Sometimes I see a page that fails to work without JS, open the source, and see just JS code. Where is the content? Web apps are often also walled gardens, and completely break the functionality of the browser (based on linking, rendering text & images, and using the back button).
Now even simple websites completely disable access to content just to show some silly animation.
The move to "web-page-apps" is not about eye candy, it's about speed, responsiveness and yes, usability. About not trying to awkwardly force an app down the http/html way.
Sorry for the PowerPoint presentation: http://www.cs.princeton.edu/~dpw/popl/06/Tim-POPL.ppt
The crux is that there are a lot of tools available when working with functional programming languages that allow programmers to avoid pitfalls common when working with mutable structures. The fact is, game programming is very different in some ways from, say, web programming. In web programming, you modify small parts of the system at a time and you probably have a nice database with ACID guarantees, you can wrap changes in a transaction and get on with your life, or just make small changes without transactions. Games just use big collections of in-memory objects to represent a complex state that changes significantly every frame, and if that was how your web application worked you'd probably call it fragile.
I can give a more concrete example. Suppose you write a quick game where you shoot missiles at invading aliens. You register a collision between the missile and the alien, so you send a "collide" event to both. Except the missile responds to the collision by exploding, which deletes the alien object, and in C++ you might send a "collide" event to a dangling pointer instead of the alien space ship.
Yes, there are lots of tools you can use to fix this. "Use immutable data structures" is one of the more fool-proof ways to do it. So right there, by using immutable data structures everywhere, you've eliminated certain classes of bugs.
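To make that concrete (the World type and its fields here are hypothetical, just mirroring the missile/alien example above):

```haskell
-- Hypothetical game state for the missile/alien example.
data World = World { aliens :: [Int], score :: Int } deriving (Eq, Show)

-- A collision yields a *new* World; the old value is untouched, so
-- nothing else can be left holding a reference to a deleted alien.
collide :: Int -> World -> World
collide alienId w =
  w { aliens = filter (/= alienId) (aliens w), score = score w + 100 }

main :: IO ()
main = do
  let w0 = World { aliens = [1, 2, 3], score = 0 }
      w1 = collide 2 w0
  print (aliens w1)  -- [1,3]
  print (aliens w0)  -- [1,2,3]: the old state is still perfectly valid
```

The dangling-pointer version of the bug simply can't be expressed: "deleting" the alien just produces a world without it, and anyone still looking at the old world sees a consistent snapshot.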
This becomes more important when you have multithreaded applications: immutable data is just inherently nice when you're using several cores: you can get rid of a bunch of locks and other synchronization techniques, many of which are a constant source of difficult-to-debug bugs for even experienced developers.
Was a rational thing to do? Maybe not, but it was fun to do.
It's not about the result, but the process of building it. Lessons learned can be applied to situations where the result does matter (and, hopefully, blogged about).
Really? Because it works just fine for me. Maybe they're doing some sort of A/B testing or partial roll-out around links?
I think it's still too early to say that Haskell gives immediate, quantifiable benefits for multithreading, but it sure gives more avenues to tame your code and lots of freedom for people to build different concurrency solutions on top of it.
It's probably along those lines, though I doubt his wife and kid(s) would allow him to do that these days ;)
I mean, consider everyone's favorite sort method. Seeing it implemented in any language does little to really show how amazing the original insight was.
I rush to say that I am highly interested in seeing this. Nor do I question what Carmack is looking to see. It is strictly the talking heads around this that have me somewhat... off.
In the Clang world Carmack usually lives in, the tools for game development are extremely mature. There are decades of knowledge and experience behind the current generation of 3D engines and games.
Game development in Haskell, on the other hand, is extremely new, as in never been done on a larger scale. Of course a lot of the knowledge and experience can be reused, but it is starting from scratch to a large degree.
So I'd see this as an experiment by a very talented programmer who wants to see if he can translate his immense knowledge of graphics programming into a working game prototype built with a functional language like Haskell.
Now, I have little doubt that a lot of this is because he used to have to program so close to the metal. The abstraction was the computer, to a large extent. The hope now, I suppose, is that he can focus on creating abstractions with the aim of a maintainable and flexible engine.
I guess I'm just prejudiced by seeing "case stories" held up as some sort of "see, this person was able to do it, language/technique/tool/whatever X is ready for everyone to use! And will solve all problems!"
So, yeah, I'm projecting. No, I don't know why. :( Sorry.
So the interesting aspect here is that Carmack's intuition has told him that there might be something worthwhile for the videogame industry to look into functional programming languages. It probably won't pan out. But if there's any way it can, then Carmack will find out how to make it a pragmatic way for studios to build large codebases.
Of course, Wolf 3D won't become a case but, if it goes okay, it could be a step to a case for Haskell eventually.
The D language has a lot of features that should appeal to large software projects, including a certain amount of feature overlap with the functional languages. It hasn't really taken off yet, but it could be a killer app or two away from taking off. Maybe.
I say all that to say that Go seems fantastic for server-side programs but I suspect other prospective languages will be better for large, performance-critical games, but I've been wrong before.
Also, I doubt Carmack would be interested in Go -- based on his previous talks, he seems to be going for a more functional approach. Go is imperative to the bone.
There's no technical reason I can think of why Go wouldn't work just as well as C# (XNA/XBLA/MonoTouch/Android) or Java (Minecraft etc) for game development.
Garbage collector sweeps can indeed be a performance problem, but there are obvious ways of minimizing their impact during gameplay.
Overall I think (read guess) that unless you are doing some sort of physics simulation, typical game logic requires relatively little cpu power and for the graphics there is hardware acceleration doing the heavy lifting.
> Overall I think (read guess) that unless you are doing some sort of physics simulation, typical game logic requires relatively little cpu power and for the graphics there is hardware acceleration doing the heavy lifting.
Well, it of course depends on the game. AAA games spend all the CPU budget allocated to them.
Granted I haven't used Go myself (leaning towards Rust) but I've read that Go is already being employed in production scenarios with good results.
These are game examples that are part of a game engine framework written in Go; they seem to move smoothly with lots of objects, including 2D physics.
Carmack seems to be interested in correctness and safety, neither for which Go has any significant offerings. In fact, it seems to be a step back for a "modern" language -- it has null pointers, no way to explicitly enforce immutability, no generics, and a crude and verbose way of handling errors. A panic() call in the code brings down the entire program.
Not necessarily having to do with coding in another language. Sometimes it's just a recompilation, or providing a new hardware abstraction layer (HAL). But sometimes it can lead to major re-development of the whole thing, be it using the same language, or a different one.
The other way is to not use state. If you squint a bit, you can see that if you explicitly encode your "state" as function arguments you can kind of update it by calling the function recursively:
  go 0 acc = acc
  go n acc = go (n - 1) (n * acc)
  fact n = go n 1
In general, if you are OK with this kind of non-destructive update, as I used here, you can encode all your state as extra parameters that you thread through your functions. You can do this by hand in most cases, but in some situations the state is very pervasive and correctly threading it around can be complex and error prone. In that case, you can look into things like the State monad (not to be confused with the ST monad!) to implicitly pass that parameter around for you.
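As a sketch of that last option, here's the same factorial with its accumulator threaded implicitly by the State monad (this assumes the mtl package, which ships with standard GHC installs):

```haskell
import Control.Monad.State (execState, modify)

-- The accumulator lives in the State monad; 'modify' replaces the
-- explicit extra parameter we threaded around by hand above.
factS :: Int -> Int
factS n = execState (mapM_ (\k -> modify (* k)) [1 .. n]) 1

main :: IO ()
main = print (factS 5)  -- prints 120
```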
Personally, I would love to see more functional possibilities in imperative languages, but have regular imperative possibilities as well. E.g. running the game and keeping state seems pretty suitable for imperative code. But doing certain updates or calculations could be expressed better with functional code.
In this case you want to have a system that reacts to an event that fires on a regular interval (as well as other sorts of input events). If you search for Functional Reactive Programming you will find some example libraries out there that try to do this in a pure manner (although I would personally say that this is all still a bit on the experimental side of things).
That said, Haskell still lets you do things the imperative way if you want! All you need to do is put the impure code in the IO monad, where it belongs.
You are only forced to be purely functional if you want to or if whoever is calling you must be a pure function. So basically, the idea is that your `main` function is impure code in the IO monad, and it can call either more impure code or pure "helper" functions. Increasing the percentage of your code that is pure is a nice thing, but it's not mandatory.
A loop is a recursive function. State is the arguments to the function. You pass an updated state to the next iteration of the loop. Haskell provides nice abstractions to make this seamless.
  gameLoop curState = do
    inputs <- getInputs
    let newState = gameIteration curState inputs