John Carmack starting port of Wolf 3D in Haskell (twitter.com)
330 points by bobfunk 1678 days ago | 137 comments



In case you were not aware, the iOS source code for Wolfenstein 3D Classic Platinum is licensed under GPL. It is available here:

http://download.zenimax.com/idsoftware/src/wolf3d_ios_v21_sr...

I thought this story was cool:

"[...] They were using the software rasterizer on the iPhone. I patted myself on the back a bit for the fact that the combination of my updated mobile renderer, the intelligent level design / restricted movement, and the hi-res artwork made the software renderer almost visually indistinguishable from a hardware renderer, but I was very unhappy about the implementation.

I told EA that we were NOT going to ship that as the first Id Software product on the iPhone. Using the iPhone's hardware 3D acceleration was a requirement, and it should be easy -- when I did the second generation mobile renderer (written originally in java) it was layered on top of a class I named TinyGL that did the transform / clip / rasterize operations fairly close to OpenGL semantics, but in fixed point and with both horizontal and vertical rasterization options for perspective correction. The developers came back and said it would take two months and exceed their budget.

Rather than having a big confrontation over the issue, I told them to just send the project to me and I would do it myself. Cass Everitt had been doing some personal work on the iPhone, so he helped me get everything set up for local iPhone development here, which is a lot more tortuous than you would expect from an Apple product. As usual, my off the cuff estimate of "Two days!" was optimistic, but I did get it done in four, and the game is definitely more pleasant at 8x the frame rate.

And I had fun doing it." [1]

[1] http://www.idsoftware.com/iphone-games/wolfenstein-3d-classi...


They said it would probably take two months, which was itself probably optimistic; they would likely have gotten it done in four months instead.

And JC got it done in 4 days....

This isn't 10x programmer anymore, this is 30x!!


Well, you better know the code you have written...


...many years ago!


Carmack has been thinking about functional programming for a while and posted his thoughts on applicable lessons for C++ a year ago:

http://www.altdevblogaday.com/2012/04/26/functional-programm...

He's a great developer and has always pushed boundaries. I look forward to his postmortem after this project is finished.


thanks for the excellent link. the comments on the article are also very nice. here is one from "NathanM" (nathan-c-meyers perhaps?):

And yes, I'd love it if the compiler (or other static code analysis) could detect how pure various bits of code are, and give reports. For far too long, compiler authors have treated compilers as a big opaque box that end users (developers) submit code to, and the compiler hands out code as if from on high. Smart developers want to have a 2-way communication with their compiler, learning about all sorts of things -- functional purity, headers over-included, which functions it decided to inline or not (especially in LTCG), etc. It's not the 1960s anymore -- developers aren't bringing shoeboxes of punchcards of source code to submit for offline processing. Let's get closer to a coffee shop where we can talk in realtime.


I think things are trending towards being more interactive.

In the immediate future, GHC is going to become more interactive by adding "type holes". Essentially, you can just leave out parts of your program and the compiler will tell you what type needs to go there. So instead of writing your program and checking if it typechecks, the type system can actually help you formulate the code in the first place!
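For readers who haven't seen them, a small sketch of what typed holes look like (syntax as of GHC 7.8; the function name here is invented for illustration). Writing `(f x, _)` below instead of `(f x, f y)` would make GHC report `Found hole: _ :: b`:

```haskell
-- If the body were written (f x, _), the compiler would refuse to
-- build but report the type the hole must have: Found hole: _ :: b.
mapPair :: (a -> b) -> (a, a) -> (b, b)
mapPair f (x, y) = (f x, f y)

main :: IO ()
main = print (mapPair (+ 1) (2, 3 :: Int))
```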

Further afield, a bunch of people at the lab I'm working at are working on interactive systems that use a solver running in the background to solve problems for the programmers. These can be used to do all sorts of things from finding bugs to actually generating new code. Being interactive lets the solver suggest things without being 100% certain--the programmer can always supply more information. This also makes the solvers easier to scale because if it's not terminating quickly, it can just ask for more guidance from the programmer.

I think the general trend towards more interactive development is pretty exciting.


There's already a very primitive version of "type holes" available, namely, undefined. I realize it's not as advanced as what's to come, but I find myself using it somewhat frequently.

(For non- or fledgling Haskellers, "undefined" has any type, so if you define a function that plugs into your code and make its return value "undefined", then you can look at the type signature of the function and learn what the compiler proved about the type of that function. Pretty handy!)
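A minimal sketch of the trick (all names hypothetical): stub out the piece you haven't written yet with `undefined`, let the whole program typecheck, and then ask GHCi with `:t render` what type the compiler inferred for the missing piece.

```haskell
-- Hypothetical game skeleton: render is stubbed with undefined, yet
-- the program still typechecks; it only crashes if render is forced.
data World = World { playerX :: Int, playerY :: Int }

step :: World -> World
step w = w { playerX = playerX w + 1 }

render :: World -> String
render = undefined  -- stub to fill in later; ask GHCi :t render

main :: IO ()
main = print (playerX (step (World 0 0)))  -- never forces render
```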


Type holes themselves are already included in the HEAD of the GHC trunk, and will be included with the next release I believe. Undefined is useful, but you can't get the types of a specific subexpression easily -- with type holes, you can.


Slight upgrade: turn on the -XImplicitParams flag and then use ?nameGoesHere instead of undefined. Detailed type information will leak out in the errors or, if it can infer all of the types, the type of the top level expression that contains your ?implicit.
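A rough illustration of that trick (names hypothetical): an unbound `?hole` shows up as a constraint in inferred types and error messages, e.g. `f :: (?hole :: Int) => Int -> Int`. Here the implicit parameter is bound at the call site so the snippet actually runs:

```haskell
{-# LANGUAGE ImplicitParams #-}

-- ?hole behaves like a named gap: its required type is threaded
-- through f's signature, and a caller can supply it with let.
f :: (?hole :: Int) => Int -> Int
f x = x + ?hole

main :: IO ()
main = let ?hole = 10 in print (f 32)
```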


Whose postmortem, the project's or Carmack's? With Haskell, you never know...


Hey Haskell never killed anyone that we know of. That would be an observed mutation of state.

However, it may be (if you'll pardon the expression) garbage collecting people that we don't know about, or cloning them in such a way that their multiple representations are indistinguishable.


Maybe if all of the objects just referenced a read only version of the world state, and we copied over the updated version at the end of the frame… Hey, wait a minute...

This sounds like a game development reference that I'm missing. Can anyone explain?


He's referring to the utility of immutable data for solving certain parallelism issues - rather than attempt to coordinate all the code that uses a data structure, you can double-buffer it and queue up the write events for the "next frame" instance.

This is a hugely successful pattern throughout a number of aspects of gaming, graphics being one of the most classic examples. Double-buffered graphics don't suffer as much from tearing and other display artifacts.
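A toy sketch of that pattern in Haskell (types invented for illustration): readers see the frozen front copy of the world, writes queue up as events against the next frame, and the "flip" just builds the next immutable copy.

```haskell
-- Hypothetical world state; readers only ever see a completed frame.
data World = World { frameNo :: Int, score :: Int }
  deriving (Show, Eq)

-- Queued write events for the next frame.
type Event = World -> World

-- Apply this frame's queued events to the front buffer, producing
-- the next immutable front buffer.
flipFrame :: World -> [Event] -> World
flipFrame front events =
  (foldr (.) id events front) { frameNo = frameNo front + 1 }

main :: IO ()
main = print (flipFrame (World 0 0) [\w -> w { score = score w + 10 }])
```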


> This is a hugely successful pattern throughout a number of aspects of gaming, graphics being one of the most classic examples.

Not really, no. Immutability comes at the cost of performance compared to mutability. The gap is shrinking between the two, but it's still wide enough that using pure immutable structures for frame buffers, shaders and other graphical concepts is simply not an option to write games.

Haskell is interesting in the sense that it doesn't prevent you from using mutable structures (e.g. Lenses, Writer) but it encodes this information in the type system. I'm really curious to read the conclusions that Carmack will draw from his experience but I wouldn't be surprised to read that at the very low levels, mutable structures are just unavoidable for high performance games.

Also, mutable structures accessed by concurrent threads are a much less difficult problem than most people claim, and it's often much easier to reason about locks and semaphores than about immutable lazily initialized structures.


I don't know where to start.

> using pure immutable structures for frame buffers, shaders and other graphical concepts is simply not an option to write games.

Seeing as people have written games in Haskell, this is clearly not true.

> Haskell is interesting in the sense that it doesn't prevent you from using mutable structures (e.g. Lenses, Writer)

Lenses and Writer both only use immutable data. It is possible to use actual mutable data in the ST and IO monads.

> but it encodes this information in the type system.

This is true of IO, but not of ST. With ST, runST :: (forall s. ST s a) -> a, hides the effects.
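A small example of what that means in practice: `sumST` mutates an `STRef` internally, but `runST` gives it an ordinary pure type, so the mutation is invisible to callers.

```haskell
import Control.Monad.ST
import Data.STRef

-- Internally imperative: loop over the list bumping a mutable
-- accumulator. Externally pure: runST seals the effects in, so the
-- exported type is just [Int] -> Int.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc

main :: IO ()
main = print (sumST [1 .. 10])
```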

> it's often much easier to reason about locks and semaphores than about immutable lazily initialized structures.

I don't know what you mean by this. In terms of functional correctness, immutable data structures, lazy or otherwise, are much easier to reason about. If you are talking about resource usage, sure, it's a little harder to reason about lazy data structures than strict ones, but give me a space leak over a race condition to track down any day.


It’ll be interesting to see how the performance issues play out, no? In order to get reliable memory behaviour, you still have to go through a certain amount of voodoo to appease the gods that govern the interplay of laziness and GC. There are comparatively few people who really know how to optimise Haskell code from top to bottom—in part because there is such a distance between top and bottom.


"It’ll be interesting to see how the performance issues play out, no?"

Not really. There's no question whatsoever that GHC can run a fine Wolf3D on a fraction of a modern hardware setup. You could do it in pure Python with no NumPy. There are tools to help with the laziness stuff, and a 3D rendering loop will fit those perfectly.


Sure, Wolf3D is almost a quarter of a century old by now.

But the performance limits of immutable structures for simulation and graphics are certainly interesting to me.


Absolutely, but it is certainly possible.


>It’ll be interesting to see how the performance issues play out, no?

Not really, 3d rendering in haskell via opengl is not new or interesting at this point. Frag is 8 years old for example: http://www.haskell.org/haskellwiki/Frag


Modern OpenGL exploits immutable data for parallelism all over the place. It also lets (and expects!) you to upload model data (vertices, colors, texture-coords, etc) to the GPU, so you only need to re-upload things that have changed.

You can even stream textures asynchronously using PBOs (pixel buffer objects), and use dual PBOs like double buffers (or using copy-on-write techniques to only re-upload dirty rectangles...)


He's alluding to frame buffers.


Do a lot of other objects read the "front" frame buffer besides the video output?


I think the whole point of a "front" framebuffer is that its only purpose is to be written to the screen. You're only ever writing to the back buffer, which is then flipped, at which point you're writing to a new buffer and it's the next frame.

[edit: If I'm wrong... ouch. But it's been a while.]


@obviouslygreen that is why "all of the objects just referenced a read only version of the world state" doesn't make sense to me as a frame buffer analogy...


It sounds like he's talking about double-buffering "model" data - like an array of all actors and their positions. You can't have one thread reading the data while another writes to it, but you can have the reading thread work on an "old" copy of the data while the writing thread modifies the live data.

Games often want physics/model threads run with a consistent timestep, but have the rendering thread run as fast as possible.
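A toy sketch of that model-data double buffering (the Actor type is invented for illustration): the renderer reads whichever immutable snapshot is currently published, while the physics step builds a fresh list from the old one and publishes it atomically.

```haskell
import Data.IORef

-- Hypothetical model data: a list of actors with positions.
data Actor = Actor { actorX :: Double } deriving (Show, Eq)

-- Pure physics step: builds a brand-new immutable snapshot.
physicsStep :: [Actor] -> [Actor]
physicsStep = map (\a -> a { actorX = actorX a + 1 })

main :: IO ()
main = do
  published <- newIORef [Actor 0, Actor 10]  -- the "front" snapshot
  old <- readIORef published                 -- renderer reads old copy
  writeIORef published (physicsStep old)     -- physics publishes new copy
  readIORef published >>= print
```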


Then I guess I was just ignorant. For me he was always one of the C icons and diehards, similar to Linus Torvalds. He's incredibly conservative about games and doesn't value creative game mechanics.

But it's nice to hear, hopefully John Carmack and id will make at least one more great in the future.


From what I can see iD have been an engine company since the days of Quake, it just so happens that they occasionally release a game to show off the new engine they're peddling.

That doesn't make Carmack any less of a great developer in how he pushes the limits of current hardware.


C icons?

His games have been C++ for quite some time now.


If we count major engine releases, then only the last two major engines (id Tech 4 and 5) from id Software have been C++, the previous three were all C (id Tech 2 and 3, as well as the Doom and Wolfenstein engines).

Another factor is that the C code that comes out of id Software is just damn good. Go ahead and read the Quake 3 Arena source code: it's one of the better reads out there, as far as C is concerned. The Doom 3 source code is C++, but it's kind of weird C++ and I would be wary of learning C++ from it. Carmack has spoken about how his C style is just so much more mature than his C++ style, and this is exacerbated in these examples because Q3A is the last game in C, but Doom 3 is the first game in C++.

Yes, he's an icon in the C world.


> If we count major engine releases, then only the last two major engines (id Tech 4 and 5) from id Software have been C++, the previous three were all C (id Tech 2 and 3, as well as the Doom and Wolfenstein engines).

This is what I meant by "for quite some time": id Tech 4 was released in 2004!


To be honest, it was more "C with classes" style of C++.


I meant C as in "C family of languages". C++ is still closer to C than to Java. I admit mentioning him alongside Linus was misleading. Haskell is a paradigm shift.


>I admit mentioning him alongside Linus was misleading

And quite insulting.


to whom?


Mr. Carmack. Linus is famous, but not a particularly good programmer, and doesn't use C over C++ for technical reasons, but for "ability to offend people" reasons.


I'm a big big supporter of this. I had one thought [1] I wanted to add to the excellent linked article.

I'm not sure if my thought is obvious or insightful, but I like it. I try to write all my new general/reusable code as pure functions whenever possible (which is almost always).

[1] https://twitter.com/shurcooL/status/327249579189870592


One of his best posts imho. Worth reading and reading again for any Object Oriented developers.


"Large scale software development is unfortunately statistical."

Too true.


I also had to make one last minute hack change to the original media -- the Red Cross organization had asserted their trademark rights over red crosses (sigh) some time after we released the original Wolfenstein 3D game, and all new game releases must not use red crosses on white backgrounds as health symbols. One single, solitary sprite graphic got modified for this release.

I always wondered about color choices for some games' health packs.


As much as that's annoying, it does make a lot of sense. Being able to assert that everything which bears that logo is medically related (and not just something random), especially in wars and the like, means you can be more certain of someone's intentions.


Wars observe trademarks? I thought the Red Cross was established by international treaty. https://en.wikipedia.org/wiki/First_Geneva_Convention


No, but if there is a red cross on white on a sign, it's good to know it's not just advertising from back before the region became a warzone.



Both the cross of the Knights Templar, and the similar Maltese cross used by the Knights Hospitaller, are quite distinct from the straight cross the Red Cross uses. I very much doubt anyone anywhere where variations like that are likely to be seen would confuse them.


And that makes it impossible for it to be protected as a trademark?


I didn't say that.


It makes sense, but the red cross is way too generic a symbol. And it makes perfect sense for a health pickup: it's a +, meaning it adds, and it's red for health. And I don't see how anyone would mistake a health pickup in a game for a Red Cross™ worker.


We have a product/website that used a blue cross in the logo. Blue Cross Blue Shield started hounding us about it, so we changed the logo to a green cross.


Especially presence/touch-activated explosive devices.


Here's a question for my fellow HNers.

In the article linked from his tweet, Carmack describes how he ported Wolfenstein3D to the iPhone. He apparently didn't start off with their own original code base, but used the open source project "Wolf3D Redux" (http://wolf3dredux.sourceforge.net) as a starting point. This was possible because id open sourced their original game, and "Wolf3D Redux" is distributed under the GNU GPL v2.

Carmack also states: "I think the App Store is an extremely important model for the software business."

Therefore my question: is it possible to publish GPL'd games in the App Store? I seem to remember that this was not possible, since ToS of the app store impose further constraints which is forbidden under the terms of the GPL.


No version of GPL is compatible with the App Store, because Apple requires that you take away rights from users that GPL protects (users must accept 5-install DRM, Apple ID with geographic restrictions, etc.), which is forbidden in section 6 of GPL.

http://www.fsf.org/blogs/licensing/more-about-the-app-store-...

I have verified that first hand by asking Apple to remove derivative of my GPL-licensed software that they were pirating and Apple complied.

In case of Wolf3D my guess is that ID Software got commercial license from the author(s) of that software (copyright holder of GPL software can choose to also license it under another license).


It's possible, though it depends on the wishes of the original authors.

Stockfish Chess [1] is the one GPL v3 licensed app that I know of which has been available for... quite some time, now. I've seen several other Chess apps built on that engine, which go as far back as when I was just starting iOS development.

Now, VLC was the one case where one volunteer for the project invoked the GPL to get Apple to take an iOS port of the app off the store [2]. Based on that situation, it seems that if the original authors of a GPL licensed codebase want to pursue a claim against an app that uses that code, they can, and Apple will take it seriously.

I don't believe that situation has changed.

EDIT: For your own sake, it's probably best if you approach the original author to see if they can make an exemption, in writing, for the DRM situation. IANAL, but that seems like it would be the cleanest way to go about handling GPL licensed code without issue.

[1] - http://stockfishchess.org

[2] - http://www.tuaw.com/2011/01/08/vlc-app-removed-from-app-stor...


Remember, you'll need to contact all the authors and they all have to consent to (essentially) relicense their code. Any open source project that accepts patches may have dozens of actual authors. You can't just ask the project maintainer/chief contributor.


Ideally, a well-organized project would have a copyright assignment process to resolve those sorts of issues.

However, as far as I know you're right. If there's no clear authority as to who retains ownership and licensing rights to the code, and the contributions made to it, it's going to get messy.


> Ideally, a well-organized project would have a copyright assignment process to resolve those sorts of issues.

A lot of projects, notably including Linux, intentionally avoid copyright assignment to make it impossible for anyone to relicense the codebase. Making sure that there are thousands of copyright holders from hundreds of jurisdictions, many of whom are not easily reachable or even knowable, all bound by common license terms protects the project from situations where some project participants would do something not agreed to by the rest, either willingly or because they were forced to (e.g. through bankruptcy).


That's a good observation, some inefficiencies are quite intentional. I didn't even know that about Linux until just the other night, when I was looking for non-GNU GPL projects that didn't require this process.

LWN.net has covered both sides of this subject quite well, in recent months, with the tedious process of relicensing VLC [1] and the GnuTLS copyright assignment controversy [2].

The politics of OSS make for some excellent reading.

[1] - https://lwn.net/Articles/525718/

[2] - https://lwn.net/Articles/529522/


> Ideally, a well-organized project would have a copyright assignment process to resolve those sorts of issues.

That only works in the US, there are various jurisdictions where it is impossible to allow third parties to relicense one’s IP in any way they want.


Interesting. Do you have details on which jurisdictions do not allow this, and why?

I admit that I'm not familiar with international copyright law. I do know that several GNU projects, and other high profile open source projects, have a policy in place wherein contributors have to agree to a copyright assignment agreement before they can contribute their modifications.

If that's not possible in some regions, they clearly still have some means of continuing to uphold their own IP. I suspect it's not as harsh as completely avoiding contributions from some regions of the world. Unless I'm mistaken.


It is not possible to assign copyright in German IP law, it is non-transferable and tied to the creator of the work or his heirs. There are some ways around this for purely software projects, e.g. the FSF Europe has a license agreement, which covers pretty much everything and should be enough for nearly all uses.

However, similar agreements (i.e. an artist allowing a record company to use his song however they wanted) signed in the 70s have been found to not extend to ‘digital’ uses, of e.g. music, by German courts as this use case did not exist yet at the time the license was granted.

I’m really saying that this is an annoyingly complicated matter and best avoided by not requiring relicensing – and I’m not a lawyer, of course (and also too lazy to find references now).

And, naturally, there is the issue whether you trust the FSF enough to do the right thing.


Since it's licensed under the GPL, I should get the source code when I buy the game, right (as a link to a github repo or whatever)?


It needn't be bundled with it, but must be available upon request, for a "charge no more than your cost of physically performing source distribution [...] on a medium customarily used for software interchange" (that's article 3 of the GPL2. The GPL3 is a bit more detailed, but similar in spirit)


I think it's only GPLv3 which forbids distribution on locked-down devices. I would guess that GPLv2 is compatible with the iPhone.


His post about the iOS port of Wolf3D was a great read, and also a crystal clear reminder of why I never play 3D games on a touchscreen-only device. The means of input are so truly terrible that I do not understand why anyone bothers. But, clearly, there's an audience that does not care, and it was neat to read about all the things Carmack tried to figure out how best to pull it off.


You should try Galaxy on Fire 2.

The trend seems to be towards difficult 3D controls, but you cannot write off the entirety of 3D games because you haven't seen good samples.


What would be the immediate benefits of this? Would it be mostly related to multithreading?

And on a side note: As a NoScript user, I'm surprised that I've had to give Javascript permissions to Twitter just so I can visit that idsoftware.com link inside of the tweet. Obviously I don't visit Twitter often, but that should never be the case for it or any site.


It's fun... He's a developer who enjoys doing stuff, it doesn't matter if it has a purpose. To him it does, and he has enough followers that he felt he should share it with people, which makes sense considering I can't wait to see the result.

And I'm sorry, but if you turn off javascript in this day and age, you should expect more sites not to work than to work. You're turning off part of the browser, and you don't like having to turn it on to use the browser? Seems a little backwards to me... considering the extremely small population of noscript users.

I also understand that I'll probably get downvoted into oblivion, as every single noscript user is probably on this site. But it's true...


It's reasonable to expect disabling javascript to have a negative effect upon a webpage which is performing some relatively complicated function. If I was running noscript, with no whitelist, and tried to use Google Calendar, I'd expect reduced or zero functionality.

But Twitter's main purpose in life, for end users, is to display text and links. There's absolutely no reason why disabling javascript should cause a link to be inaccessible. This is a legitimate complaint.


And websites are also not just text and links anymore. Twitter would probably have to put man-hours into making whatever it is work for noscript; they don't just do something for no reason. But spending effort even thinking about users without javascript is, in my mind, a waste of effort.

Anyways I agree that he should still be able to see text/link. But I turned on the site in Google Chrome, with javascript turned off... And it worked perfectly fine. He however has "noscript" which does not work that way, and does stuff preventing the site from doing normal functionality... Completely his fault.


Not a waste of effort, just one more step away from the roots of the web. If someone can't gracefully degrade images, text and links they are a failure of a web developer and have betrayed what the web should stand for.


>And websites are also not just text and links anymore

Twitter is. Twitter is an absolute clusterfuck technology wise, they are the prime example of doing everything wrong. Defending their idiocy does not reflect highly on you.


I think he was just surprised that something like a simple link needed Javascript.


No, I'd agree. I've stopped building <noscript> tags into my sites as virtually every browser supports Javascript now. If folks are smart enough to turn off Javascript they're just as capable of turning it back on.

I personally block Flash by default (mostly to stop audio ads), and I have no problem turning it back on when a site requires it.


Folks don't want to turn it on, because it's not needed for most cases and it's the perfect way (after flash and java) to get malware.

I don't know what's happening to web development, but since this app fever took hold, developers have been fighting for eye candy and LESS accessibility. Sometimes I see a page that fails to work without JS, open the source, and see just JS code. Where is the content? Web apps are often walled gardens, too, and completely break the functionality of the browser (based on linking, rendering text & images, and using the back button).

Now even simple websites completely disable access to content just to show some silly animation.


I can't think of a way of getting malware from JavaScript any more than from a simple link; that's just plain false.

The move to "web-page-apps" is not about eye candy, it's about speed, responsiveness and yes, usability. About not trying to awkwardly force an app down the http/html way.

A broken website/app is simply broken: if things do not work as expected, that's not the fault of webapps per se or JavaScript - it's all doable and not a big deal anymore (the back button thing).


google have a whole competition devoted to the basic idea. it's called pwnium. http://scarybeastsecurity.blogspot.ie/2013/02/exploiting-64-...


>The move to "web-page-apps" is not about eye candy, it's about speed, responsiveness and yes, usability

No, it is about following fads, just as web development has always done. Creating sites that are slower, less responsive, and offer terrible usability is clearly not done for reasons of speed, responsiveness and usability. Enhancing a page with javascript can increase responsiveness and usability. Replacing the page with javascript is moronic.


There are cases where a "full Javascript" page provides clear benefits (eg, GMail).


Perhaps I should have been more clear. I'm just talking about web pages, basically just information exchange, Twitter being the example here. Browser applications are a different thing, and obviously have to be delivered in javascript. There's nothing wrong with browser applications, but pretending web pages should be built as browser applications is insanity.


GMail is just some text and links too. Why does it get a free pass, but twitter doesn't?


GMail does have a no-javascript fallback, that has all of the important functionality.


I'm reading this in elinks right now. Almost no site linked by hackernews works in elinks, but hackernews still do.


The benefits of functional programming languages have been discussed before in the game industry, notably by Tim Sweeney.

Sorry for the PowerPoint presentation: http://www.cs.princeton.edu/~dpw/popl/06/Tim-POPL.ppt

The crux is that there are a lot of tools available when working with functional programming languages that allow programmers to avoid pitfalls common when working with mutable structures. The fact is, game programming is very different in some ways from, say, web programming. In web programming, you modify small parts of the system at a time and you probably have a nice database with ACID guarantees, you can wrap changes in a transaction and get on with your life, or just make small changes without transactions. Games just use big collections of in-memory objects to represent a complex state that changes significantly every frame, and if that was how your web application worked you'd probably call it fragile.

I can give a more concrete example. Suppose you write a quick game where you shoot missiles at invading aliens. You register a collision between the missile and the alien, so you send a "collide" event to both. Except the missile responds to the collision by exploding, which deletes the alien object, and in C++ you might send a "collide" event to a dangling pointer instead of the alien space ship.

Yes, there are lots of tools you can use to fix this. "Use immutable data structures" is one of the more fool-proof ways to do it. So right there, by using immutable data structures everywhere, you've eliminated certain classes of bugs.

This becomes more important when you have multithreaded applications: immutable data is just inherently nice when you're using several cores: you can get rid of a bunch of locks and other synchronization techniques, many of which are a constant source of difficult-to-debug bugs for even experienced developers.
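To make the missile/alien example concrete, here is a rough immutable-world sketch (all names invented): entities live in a `Map` keyed by id, so a collide event delivered to an already-deleted entity is a harmless no-op lookup rather than a dangling pointer.

```haskell
import qualified Data.Map.Strict as Map

type EntityId = Int
type World = Map.Map EntityId String

-- Handling a collide event against an immutable world: if the entity
-- was already removed by an earlier event this frame, nothing happens.
collide :: EntityId -> World -> World
collide eid w = case Map.lookup eid w of
  Nothing -> w                 -- entity already gone: safe no-op
  Just _  -> Map.delete eid w  -- the explosion removes it

main :: IO ()
main = do
  let w0 = Map.fromList [(1, "missile"), (2, "alien")]
      -- The duplicate collide for the alien is harmless here.
      w1 = collide 2 (collide 2 (collide 1 w0))
  print (Map.toList w1)
```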



I'm not John Carmack, but I spent 2 hours today trying to implement basic cryptography ciphers in Racket. What was the immediate benefit? Nothing, but it was fun.


During Hurricane Sandy I used the last of my MacBook Air's battery to code up a Huffman Coding processor in Haskell. Same reason: no reason, just fun. (Hell of a way to pass time, too)


Same here. Recently I just re-wrote a language runtime from a toy compiler I had, from C to Assembly.

Was it a rational thing to do? Maybe not, but it was fun to do.


Immediate benefits of porting an ancient game that runs fine on just about anything?

It's not about the result, but the process of building it. Lessons learned can be applied to situations where the result does matter (and, hopefully, blogged about).


> As a NoScript user, I'm surprised that I've had to give Javascript permissions to Twitter just so I can visit that idsoftware.com link inside of the tweet.

Really? Because it works just fine for me. Maybe they're doing some sort of A/B testing or partial roll-out around links?


IMO, the biggest benefit of Haskell is that it collects lots of programming language advancements that have been mostly ignored by the mainstream until recently. Haskell does generics really well, has some really cool abstraction facilities (leading to cool things like parser combinators, and a rich typeclass ecosystem), and the type system is really good at letting you write code that is correct if you manage to compile it.

I think it's still too early to say that Haskell gives immediate, quantifiable benefits for multithreading, but it sure gives more avenues to tame your code and lots of freedom for people to build different concurrency solutions on top of it.


I think the Twitter link / Javascript issue is caused by Twitter's wrapping all links into their URL shortening system.


I recall Carmack used to go on week-long sabbaticals, where he would lug his computer to a hotel room and do research.

It's probably along those lines, though I doubt his wife and kid(s) would allow him to do that these days ;)


I'm more than a little curious about what folks think this will prove. To a large extent, this feels akin to watching a master photographer switch from film to digital. The vast majority of the craft is hard to see at the implementation layer.

I mean, consider everyone's favorite sort method. Seeing it implemented in any language does little to really show how amazing the original insight was.

I rush to say that I am highly interested in seeing this, and I don't question what Carmack is looking to see. It is strictly the talking heads around this that have me somewhat... off.


I think it's more like watching a master photographer experiment with a new way of taking pictures (think HDR or something), to keep with your reference.

In the C/C++ world Carmack usually lives in, the tools for game development are extremely mature. There are decades of knowledge and experience behind the current generation of 3D engines and games.

Game development in Haskell, on the other hand, is extremely new, as in never been done on a larger scale. Of course a lot of the knowledge and experience can be reused, but to a large degree it is starting from scratch.

So I'd see this as an experiment by a very talented programmer who wants to see if he can translate his immense knowledge of graphics programming into a working game prototype built with a functional language like Haskell.


In my head, I was thinking of someone like Ansel Adams when I made my comparison. So, I was thinking of pioneers in the field. I believe I see what you mean, that at this point there is plenty of knowledge in digital photography. I was just trying to invoke the idea that the algorithms and thought processes that go into creating a project are not necessarily reflected in the actual program that is written.

Now, I have little doubt that a lot of this is because he used to have to program so close to the metal. The abstraction was the computer, to a large extent. The hope now, I suppose, is that he can focus on creating abstractions with the aim of a maintainable and flexible engine.


Yep, but even then it's still just Wolf 3D, which is basically 20-year-old game technology. Certainly a good first step into graphics programming in functional languages, but I doubt we will ever see something like Unreal Engine 4 written in Haskell. All the effort put into C-based engines carries such a legacy that starting from scratch would be an immense effort for questionable benefits.


Personally, it's just interesting to see the original creator start such a port, especially when the original creator is of the caliber that Carmack is.


Oh, no doubt.

I guess I'm just prejudiced by seeing "case stories" held up as some sort of "see, this person was able to do it, language/technique/tool/whatever X is ready for everyone to use! And will solve all problems!"

So, yeah, I'm projecting. No, I don't know why. :( Sorry.


It's healthy to be skeptical. But it's unhealthier not to try everything and discover what works. It's pretty rare for someone to immediately see which of several possible new inventions might be better (and why they're better). I'm referring to the inventor -- an inventor often has several possibilities for new things he could try to do, and he has to choose which of those to pursue. It's not at all obvious which path to choose. So most of the time, you just try all possibilities as fast as possible, letting intuition guide you. For example, it probably wasn't immediately obvious to DHH that it was a win to write websites in Ruby until he'd tried writing websites in Ruby to see how it'd turn out. History tends to get rewritten so that it's obvious in hindsight that it would work. But it wasn't at all obvious until it was suddenly obvious to him, which was only after he'd had the experience of trying it.

So the interesting aspect here is that Carmack's intuition has told him that there might be something worthwhile for the videogame industry to look into functional programming languages. It probably won't pan out. But if there's any way it can, then Carmack will find out how to make it a pragmatic way for studios to build large codebases.


I am currently reading through the PhD thesis behind this project: https://casanova.codeplex.com/ (a domain-specific language for games based on F#). It looks interesting, and they have many examples, including a fully fledged RTS. The thesis is buried in the source code: https://casanova.codeplex.com/SourceControl/changeset/view/2...


Not just Carmack: Tim Sweeney (Epic Games) gave a presentation at POPL discussing the value of functional programming for games: http://www.cs.princeton.edu/~dpw/popl/06/Tim-POPL.ppt


In the game industry, such stories are important. Unless you are either a celebrity or an unknown indie, you have to use a story like this to back up every decision you make. For example, before id Software's DOOM nobody would write a game in C, just because no high-profile game had been written in C.

Of course, Wolf 3D won't become a case but, if it goes okay, it could be a step to a case for Haskell eventually.


I guess my entire point is that this will definitely lead to more questions. It will likely not lead to any direct answers. I welcome the questions, I have grown weary of premature answers.


Wolf 3D was written in C for the most part.


A bit off-topic: has John Carmack made any comments on the Go programming language? It sounds like a language that should appeal to him. The name makes it hard to google for (I've tried golang, etc.). I couldn't find anything; is it because Go has a long way to go before it's mature enough to use in games?


Considering the language features that Carmack is interested in, Rust seems like a better candidate than Go for him. Unfortunately, Rust isn't even in beta yet, so it will be a while before it's suitable for anything except experimental projects.

The D language has a lot of features that should appeal to large software projects, including a certain amount of feature overlap with the functional languages. It hasn't really taken off yet, but it could be a killer app or two away from taking off. Maybe.

I say all that to say that Go seems fantastic for server-side programs but I suspect other prospective languages will be better for large, performance-critical games, but I've been wrong before.


As of right now, Go is essentially completely uninteresting to game dev because it's not possible to guarantee steady frame rates in it. (The GC solution is bad for latency.)

Also, I doubt Carmack would be interested in Go -- based on his previous talks, he seems to be going for a more functional approach. Go is imperative to the bone.


Well I would agree if we are talking about very demanding current gen style games (which id software indeed typically produce) but if we look at indie-style games then having garbage collection doesn't preclude a language from being used in games development, or having steady frame rates.

There's no technical reason I can think of why Go wouldn't work just as well as C# (XNA/XBLA/MonoTouch/Android) or Java (Minecraft etc) for game development.

Garbage collector sweeps can indeed be a performance problem, but there are obvious ways of minimizing their impact during gameplay.

Overall I think (read guess) that unless you are doing some sort of physics simulation, typical game logic requires relatively little cpu power and for the graphics there is hardware acceleration doing the heavy lifting.


Go will likely eventually have a GC that is not that terrible. However, right now it's a simple parallel mark & sweep. If you allocate more than 1000 objects, and do any allocation during simulation, there will be very noticeable GC pauses. The current Go GC is nowhere near the capability of the C# or Java GCs, and would not be suitable for Minecraft-level games.

> Overall I think (read guess) that unless you are doing some sort of physics simulation, typical game logic requires relatively little cpu power and for the graphics there is hardware acceleration doing the heavy lifting.

Well, it of course depends on the game. AAA games spend all the CPU budget allocated to them.


I realize that it doesn't compare to Java's GC (which is likely state-of-the-art) but is it really terrible?

Granted I haven't used Go myself (leaning towards Rust) but I've read that Go is already being employed in production scenarios with good results.


It's perfectly passable for web serving and other such high-latency tasks. In a GC, the goals of throughput and latency are diametrically opposed -- optimizing for one makes the other worse. The present simplistic GC is a typical throughput-oriented design.


I just remembered these (I knew I'd seen some game examples in Go):

http://www.youtube.com/watch?v=BMRlY9dFVLg http://www.youtube.com/watch?v=iMMbf6SRb9Q

These are game examples built on a game engine framework written in Go; they seem to move smoothly with lots of objects, including 2D physics.


Out of curiosity, what makes you say that Go would appeal to Carmack? Is it just because it is hyped as a "systems" language? For the way (some of) the Go authors define "systems", Java would do an equally good (if not better) job.

Carmack seems to be interested in correctness and safety, neither of which Go has any significant offerings for. In fact, it seems to be a step back for a "modern" language: it has null pointers, no way to explicitly enforce immutability, no generics, and a crude and verbose way of handling errors. A panic() call in the code brings down the entire program.


Probably because there are like a billion other things he has to do.


Can someone tell me what exactly a 'port' is? Is it just rewriting all the code base in a target language?


Porting is making something that runs on one platform/environment run on another.

Not necessarily having to do with coding in another language. Sometimes it's just a recompilation, or providing a new hardware abstraction layer (HAL). But sometimes it can lead to major re-development of the whole thing, be it using the same language, or a different one.


Rewriting the code base in another language would be considered a 'rewrite'. Usually, ports are in the same language but targeted at a different platform whether that be an OS or architecture, or even something like a different graphics library (OpenGL game ported to DirectX).


Referring to total rewrites in different languages as "ports" has a history in the game development world going back to the beginning, when those doing "ports" often did not even get access to original source or assets, and the games would be written in asm for the target platform anyway.


My heart goes out to the poor Redux project maintainer. Can just imagine him a few months from now checking on his old email address that he's forgotten about and finding John's invite. Friendly reminder to everyone to keep your project contact info current.


This is encouraging; I have been working on my own game in Haskell for the last few weeks, and it is fun. I hope to see more insight and progress from John soon.


So how do you keep state like health points in a functional language? :)


One way you could do it is by using mutable state. Impure languages such as Scheme or OCaml let you use mutable state like normal imperative languages do, and in Haskell you can use mutable references inside some special monads (you even have more than one kind of mutable reference, depending on which monad you are in: IORef, STRef, etc.).
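For instance, a minimal sketch of the mutable-reference approach, using an IORef in the IO monad (the names here are hypothetical, purely for illustration):

```haskell
import Data.IORef

-- Hypothetical sketch: a player's health kept in a mutable IORef.
-- Every read and write is an IO action.
damagePlayer :: IORef Int -> Int -> IO ()
damagePlayer healthRef amount =
  modifyIORef' healthRef (\hp -> max 0 (hp - amount))

demo :: IO Int
demo = do
  healthRef <- newIORef 100   -- start at full health
  damagePlayer healthRef 30
  damagePlayer healthRef 90   -- clamps at zero rather than going negative
  readIORef healthRef
```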

The other way is to not use state. If you squint a bit, you can see that if you explicitly encode your "state" as function arguments you can kind of update it by calling the function recursively:

   go 0 acc = acc
   go n acc = go (n-1) (n * acc)

   fact n = go n 1
In the previous example, the code does exactly the same thing as the usual imperative loop, but the accumulator is a parameter of the helper function instead of being a mutable variable.

In general, if you are OK with the kind of non-destructive updates I used here, you can encode all your state as extra parameters that you thread through your functions. You can do this by hand in most cases, but in some situations the state is very pervasive and threading it around correctly can be complex and error-prone. In that case, you can look into things like the State monad (not to be confused with the ST monad!), which implicitly passes that parameter around for you.
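As a minimal sketch of that last point (assuming the `mtl` package for Control.Monad.State), the factorial accumulator from above can be rewritten so the State monad does the parameter threading:

```haskell
import Control.Monad.State

-- Hypothetical sketch: the accumulator lives in the State monad,
-- so it is threaded through implicitly instead of by hand.
step :: Int -> State Int ()
step n = modify (* n)

factS :: Int -> Int
factS n = execState (mapM_ step [1 .. n]) 1
```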


But for a game your state needs to update at specific time intervals, otherwise you walk too slow or too fast. How can that ever be purely expressed?

Personally, I would love to see more functional possibilities in imperative languages, but have regular imperative possibilities as well. E.g. running the game and keeping state seems pretty suitable for imperative code. But doing certain updates or calculations could be expressed better with functional code.


> But for a game your state needs to update at specific time intervals, otherwise you walk too slow or too fast. How can that ever be purely expressed?

In this case you want to have a system that reacts to an event that fires at a regular interval (as well as to other sorts of input events). If you search for Functional Reactive Programming you will find some example libraries out there that try to do this in a pure manner (although I would personally say that this is all still a bit on the experimental side of things).

That said, Haskell still lets you do things the imperative way if you want! All you need to do is put the impure code in the IO monad, where it belongs.

You are only forced to be purely functional if you want to be, or if whoever is calling you must be a pure function. So basically, the idea is that your `main` function is impure code in the IO monad, and it can call either more impure code or pure "helper" functions. Increasing the percentage of your code that is pure is a nice thing, but it's not mandatory.
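A minimal sketch of that split (hypothetical names; `demoMain` stands in for `main`): the game logic is a pure function that is easy to test, and only the thin outer layer lives in IO:

```haskell
-- Hypothetical sketch: pure core, impure shell.
newtype Health = Health Int deriving (Show, Eq)

-- Pure helper: computes the next value; no side effects.
applyDamage :: Int -> Health -> Health
applyDamage dmg (Health hp) = Health (max 0 (hp - dmg))

-- Thin impure layer: the only IO here is the printing.
demoMain :: IO ()
demoMain = do
  let h0 = Health 100
      h1 = applyDamage 25 h0  -- pure update
  print h1
```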


>How can that ever be purely expressed?

A loop is a recursive function. State is the arguments to the function. You pass an updated state to the next iteration of the loop. Haskell provides nice abstractions to make this seamless.


"Sufficiently advanced trolling is indistinguishable from ignorance." I'm leaning slightly towards troll due to the smiley face. But I find myself forced to reply anyway, because such a good tutorial came out recently that covers doing exactly what this question asked about. http://www.haskellforall.com/2013/05/program-imperatively-us...


No, it was not a troll. The thing is, I sort of know the answer, but it never fully reaches my inner understanding, and at the same time I'm interested in it, so I just asked about the core issue I'm always wondering about in the simplest form... Maybe I should just actually try it with a functional language, but I never get around to doing it.


Health points aren't state; they're a piece of data that exists at a given time. If you have time, go on YouTube and watch some of the talks Rich Hickey gives about state; he breaks it down far more eloquently than I can.


From what I hear (I don't use functional languages, so I don't know for sure), functional languages minimize state, but don't erase it entirely.


You can do something like:

  gameLoop curState = do
    inputs <- getInputs
    let newState = gameIteration curState inputs
    drawState newState
    gameLoop newState



Any word on a Quake port to iOS?


It's gonna be another Perl 6.


John Carmack is actually pretty good at writing Wolfenstein 3D.



TBH, this seems more like porting perl 1 ( https://github.com/TPF-Community-Advocacy/perl1.0 ) to Haskell. :-)



