As someone who has worked extensively in the PC games industry (programming lead on Warcraft, Warcraft II, Diablo, StarCraft, Battle.net, Guild Wars) the reason is quite straightforward: that's where all the users who buy games are. During my time at Blizzard the Mac versions of our titles sold ~4% of the PC numbers, though I expect those numbers have changed with the rise of Apple and, to a much lesser extent so far, desktop Linux.
Even with the rise of Linux -- and my belief is that it will rise very rapidly with the advent of Steam for Linux -- PC game developers have years and years of development experience advantage on the Windows platform.
When I and my two co-founders (all of us programmers) started ArenaNet to build Guild Wars in 2000 we considered *nix vs. Windows for the server platform and decided that we'd be more productive continuing to build on Windows based on our previous programming background. All of us had extensive experience with low-level coding on Windows (device drivers, async coding with IOCompletion Ports) and knew that it would take time to replicate that expertise on another platform.
Beyond the learning curve we knew it would be more convenient to be able to run the game server and game client on the same computer. During development I ran two copies of the Guild Wars match-making server, and two copies of the game client, on a single desktop box to test the matching algorithm code. Having to manage that process across multiple computers would have been more of a hassle.
On a personal note, I've been spending a lot of time working on building/using Linux in a virtualized environment (https://github.com/webcoyote/linux-vm). Linux is an awesome development environment! While I much prefer doing C/C++ development using Visual Studio, Linux is better for other programming languages I use or have tried: Ruby, Python, Go, D, and Erlang. And the ecosystem, with projects like Redis, ZooKeeper, Postgres and a slew of NoSQL servers makes it incredibly powerful because it isn't necessary to write everything from scratch (except SQL), as we did for Guild Wars.
Thanks for your insights, Patrick. I've heard nothing but good things about working at ArenaNet. Nobody has ever mentioned whether that's due to your guiding hand or the fact that you left.
Being a very experienced game developer who tried to switch to Linux, I have posted about this before (and gotten flamed heavily by reactionary Linux people).
The main reason is that debugging is terrible on Linux. gdb is just bad to use, and all these IDEs that try to interface with gdb to "improve" it do it badly (mainly because gdb itself is not good at being interfaced with). Someone needs to nuke this site from orbit and build a new debugger from scratch, and provide a library-style API that IDEs can use to inspect executables in rich and subtle ways.
Productivity is crucial. If the lack of a reasonable debugging environment costs me even 5% of my productivity, that is too much, because games take so much work to make. At the end of a project, I just don't have 5% effort left any more. It requires everything. (But the current Linux situation is way more than a 5% productivity drain. I don't know exactly what it is, but if I were to guess, I would say it is something like 20%.)
That said, Windows / Visual Studio is, itself, not particularly great. There are lots of problems, and if someone who really understood what large-program developers really care about were to step in and develop a new system on Linux, it could be really appealing. But the problem is that this is largely about (a) user experience, and (b) getting a large number of serious technical details bang-on correct, both of which are weak spots of the open-source community.
Secondary reasons are all the flakiness and instability of the operating system generally. Every time I try to install a popular, supposedly-stable Linux distribution (e.g. an Ubuntu long-term support distro), I have basic problems with wifi, or audio, or whatever. Audio on Linux is terrible (!!!!!!), but is very important for games. I need my network to work, always. etc, etc. On Windows these things are not a problem.
OpenGL / Direct3D used to be an issue, but now this is sort of a red herring, and I think the answers in the linked thread about graphics APIs are mostly a diversion. If you are doing a modern game engine and want to launch on Windows, Mac, iOS, and next-generation consoles, you are going to be implementing both Direct3D and OpenGL, most likely. So it wouldn't be too big a deal to develop primarily on an OpenGL-based platform, if that platform were conducive to game development in other ways.
I would be very happy to switch to an open-source operating system. I really dislike what Microsoft does, especially what they are doing now with Windows 8. But today, the cost of switching to Linux is too high. I have a lot of things to do with the number of years of life I have remaining, and I can't afford to cut 20% off the number of years in my life.
I'm a Linux developer who had an internship at a game company a few years ago. I had the opposite experience.
During my internship, there were problems with Visual Studio. IntelliSense made everything so slow we had to disable it. VS was pretty slow and was hard to use without being fullscreen on a big monitor.
I use Emacs and the gdb integration is excellent. It's never slowed me down. I've customized it and find it painful to use another debugger (or an unconfigured gdb), so please don't have it nuked. When I develop under Windows (without Cygwin), I miss many tools I use when programming: valgrind, objdump, grep, sed. I also miss having more control over the compiler and linker. Makefiles seem to be much more powerful and flexible, although more complicated, than most IDEs' project files. SVN and git are more complicated to use without the command line.
The game I worked on (Deus Ex: Human Revolution) was being developed for PC, Xbox 360 and PS3. Developing for the PS3 means that the code goes through GCC. This caused a few headaches for my colleagues, as it is more rigorous with standards (a missing newline at the end of a file gives a warning, templates need to have class or typename, etc.). I was used to it and actually like the fact that it respects standards.
I had to install Windows 7 on a computer recently and was baffled by the fact that the Realtek Ethernet, USB 3 and video drivers weren't included on the base Windows 7 CD. Had I used the manufacturer's recovery option it might have helped, but my computer would have been loaded with crapware. Ubuntu long-term support works like a charm on that computer.
From what I've written so far you might conclude that I'm a FOSS fanboy who dismisses everything Windows has to offer. That's not it; I'm used to my tools, and everything else seems slightly quirky to me. My tools aren't terrible and don't need to be nuked.
Most game developers have many years of experience developing for Windows and use Windows as their primary platform. Programming for Linux is different, strange and possibly frustrating. I don't think the biggest issue with Linux game development is the quality of the tools available under Linux, but the fact that most game developers are used to tools only available for Windows.
The one thing I think is lacking for Linux to be a viable game platform is quality graphics drivers, but this seems to be improving with Valve's recent efforts.
TL;DR: Using Windows is just as quirky as using Linux. The biggest difference is that you are used to your quirks as I am to mine. Most game developers are used to Windows, and switching to Linux is hard and uncomfortable.
GCC doesn't "respect standards"; it has lots of its own features and extensions that will make you stumble when actually writing code for a fully compliant compiler (not that there is one). You can still have makefiles on Windows; it uses the same toolchain setup internally that we are so used to from the Unix world. And frankly, the debuggers on Windows are just completely... ahead.
The Visual Studio debugger can step from managed code, into C++/CLI semi-managed code, into native code. It automatically downloads symbols for all system APIs you are ever going to encounter, providing symbols for internal functions, too. It has expression evaluation on all these layers. There's also the standalone WinDbg debugger, which most importantly does kernel-mode debugging but is designed similarly to gdb.
Also, of course: edit and continue. The way that GDB is decoupled from the compiler means that they probably won't ever be able to do that without major pains. It's a great design decision for when you need to debug unknown or hostile programs, but that's not the use case for most developers.
What I meant by "respect standards" was things like giving a warning if there is no new line at the end of a file, not accepting templates unless they are in the form "template <typename TYPE>" (the VS compiler lets you omit typename). I'm not sure which extensions you're talking about but every extension in GCC I know of is marked as such with a prefix. I think you'd find it hard to argue that the VS compiler is more standard compliant than GCC (or any other for that matter).
I know that you can have most of the GNU toolchain available under windows using cygwin. I use it quite a bit when I work under windows. I assume that the average windows game developer doesn't use these or he would find developing under linux very familiar (which hasn't been my experience while working).
The VS debugger features you mention seem to be compiler features rather than debugging features.
Now, if you're arguing over these details I'm afraid you missed my point: game developers aren't having a hard time developing games for linux because gdb doesn't support stepping from managed code into semi managed code. They are having a hard time because they aren't used to the tools available in linux. I wasn't saying the tools under windows are bad, rather that, as a linux developer, they can be as hard to use as my tools are to a windows developer.
I've spent the last seven years working on video game middleware and have had plenty of exposure to many development environments. I've spent years developing for Windows, Linux, and Mac OS, and worked on every console that has been available on the market in that time.
GDB is different, but it's different in a bad way. Jonathan is completely correct in that it's a much worse experience because it doesn't easily give you the information you need to consume in order to find the problem you're looking to solve. Using the text user interface helps considerably but it still gets in the way. Sure, seeing source is fine, but what if you want disassembly too? Mixed source/assembly? Register view? Watches? Switching between threads easily? Viewing memory? DDD is an improvement but it still feels like it's 8 years behind. I'm glad to hear you're productive in that environment, most people (especially most game developers) aren't, and based on my experience getting people up to speed and fixing problems it's easier to get them going on Windows than on Linux.
That said, it's not the worst though. Green Hills takes that crown by a country mile.
The SN Systems debugger for PS3 is the best one I've used, bar none. It gives you all the data you need, is responsive, and has a few really nice features (write memory region to disk).
Not quite sure why you used gcc for PS3, SNC is far superior. It's faster at compiling and generates both faster and smaller code.
> Sure, seeing source is fine, but what if you want disassembly too? Mixed source/assembly? Register view? Watches? Switching between threads easily? Viewing memory?
Disassembly: (gdb) layout asm
Mixed source/assembly: (gdb) layout split
Register view: (gdb) layout regs
Watches: There is no window, which is too bad. You can say "display foo" and it will print the value of foo each time the execution breaks. You can also say just "display" and it will print the values of all variables you currently have set to display.
Switching between threads: Say "info threads" to show all the threads in your application. Say "thread x" to switch to thread number x.
Viewing memory: Again, there is no window, which is too bad. You can, however, use the "x" command, which is pretty full-featured and allows you to print the contents of a given-size chunk of memory, starting at a given point to the console in pretty much any format you might desire.
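For example (variable names hypothetical), a few typical invocations of x look like this:
(gdb) x/8xw buffer
(gdb) x/16cb message
(gdb) x/4gd &totals
The first prints eight 4-byte words in hex starting at the address held in buffer, the second prints sixteen bytes as characters, and the third prints four 8-byte values as signed decimals.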
You may already be aware of these features, but are discounting them for some reason?
Edit:
Turns out, GDB also supports writing memory regions to disk:
Can you share some of your more generally useful gdb settings / macros?
I've been working with Linux/gdb for years and while I do agree it is powerful -- I don't find it very convenient.
For example, up until gdb 7.4 (pretty recent) there was no way to automatically skip certain functions -- the 'skip' command added in gdb 7.4 is almost a must for debugging heavy C++ code (and you still cannot add it to your .gdbinit file...)
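For reference, the gdb 7.4 syntax is roughly this (the function and file names here are just illustrative):
(gdb) skip function std::char_traits<char>::length
(gdb) skip file stl_vector.h
(gdb) info skip
The first two tell the stepper to pass over those functions instead of descending into them, and the last lists the active skips.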
Thanks for giving your insight, I found this post really interesting. Without going too far off topic, I was just curious: what kind of work did they have you do there? It just seems like the game industry doesn't get a lot of interns due to the complexity, cost, and time constraints of making a major game like Deus Ex: Human Revolution.
As an intern I was basically treated as a junior programmer. I had a senior programmer mentor me and usually ended up doing small, week-long tasks for him. He was a great guy and really made this enjoyable.
His main task was the cover system. I helped him with this as an introduction to the game engine. Once I had a grasp of the existing code and demonstrated that I was competent, he offloaded some of his simpler tasks on me. This usually involved modifying the game editor for the artists.
The modification I remember most clearly was enabling the artists to control the camera's FOV and the depth of field effect. The game engine didn't support this so I had to use a hidden bone in the models to control these values.
Working at Eidos Montreal as an intern was a great experience. It enabled me to work in video games (childhood dream) before realizing that it wasn't quite for me (I'm now trying my hand at entrepreneurship, it's all the rage).
> Someone needs to nuke this site from orbit and build a new debugger from scratch, and provide a library-style API that IDEs can use to inspect executables in rich and subtle ways.
LLDB[1] can do this, but I'm not sure how far along it is.
I use LLDB with some regularity. The last I checked, lldb is essentially at feature parity with gdb in user space (not sure about kernel debugging), and has additional features beyond that aimed at fixing some of the mistakes of gdb.
As a small example of something LLDB got right, say I want to put a breakpoint in this C++ template function:
template<int X> int factorial() { return X * factorial<X - 1>(); }
template<> int factorial<0>() { return 1; }
Because it is a template, this one line in code maps to multiple program locations (factorial<1>, factorial<2>...). gdb will create a separate breakpoint for each location, and each must be managed separately. But lldb maintains a logical/physical breakpoint separation: one logical breakpoint maps to multiple breakpoint locations, so you can manage all of them as a single value, and more easily treat factorial as if it were a single function.
(Maybe more recent gdbs do this too - my version is rather old.)
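To illustrate the logical/physical split (file name and line number hypothetical), the lldb session looks something like:
(lldb) breakpoint set --file factorial.cpp --line 1
(lldb) breakpoint list
(lldb) breakpoint disable 1
Breakpoint 1 is the single logical breakpoint; listing it shows each resolved location (one per instantiation), and disabling it disables all of them at once.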
One downside of lldb is that, while its command line syntax is very regular, it's also quite verbose. gdb's 'attach myproc' becomes `process attach --name myproc`. gdb's `up` becomes `frame select --relative=1`.
lldb does have a set of macros and abbreviations that cut down on some of the verbosity, but they haven't always worked well for me, and I hope they overhaul the syntax.
Judging by the version that's in Xcode, lldb is not ready yet.
The command syntax is a bit inconvenient compared to gdb. Perhaps I should overlook that, because it's presumably designed to be the lowest level of the debugger, so you interact with a GUI or something rather than with lldb directly, but Xcode's expression window is such utter unreliable junk that I end up having to use lldb by hand all the time anyway.
I've had problems with variable substitution. You can (say) ask it the value of $rip to find out what the value of the RIP register is, but if you try to set a breakpoint at $rip, it doesn't know what $rip is.
I've had problems with lldb telling me that a struct is forward-declared, even though I'm trying to print a local value of that struct type.
Sometimes lldb just gets confused, as in this exchange, which I reproduce verbatim, just as it happened:
(lldb) p this
(const CustomTrafficStyle *) $2 = 0x200cf4e0
(lldb) p *this
error: invalid use of 'this' outside of a nonstatic member function
error: 1 errors parsing expression
lldb doesn't have gdb's @ operator for printing arrays; instead, there's this separate command for dumping memory. So instead of "p x[0]@100", you might do "me r -ff -c100 `x`" - which is obviously a big pain, because it's a lot more verbose. I don't even know offhand how you'd use an expression for the count argument either (more inconvenient syntax.) (Furthermore, I don't even believe the me command does exactly the same thing, because I don't think it prints structs, but it usually does the job just about well enough.)
Finally, and most tiresomely, lldb will sometimes just print nonsense. So you might end up with something like this, which is all reasonable enough: (made-up example representative output, not a copy and paste)
(lldb) p x
(int *) $2 = 0x1234578
(lldb) p i
(int) $3 = 100
(lldb) p x[100]
(int) $4 = 123
(lldb) p &x[100]
(int *) $5 = 0x12345998
But then...
(lldb) p x[i]
(int) $6 = 0
(lldb) p &x[i]
(int *) $7 = 0xc4231777
As you hint at, Xcode is a part of the problem here. The debugging parts of Xcode (ignoring the horrible things wrong with Xcode as a whole) have always been a pile of junk, even before they introduced lldb. This is complicated by the fact that in my experience Xcode was always built upon a quirky Apple fork of outdated third-party tools.
So, I guess what I'm saying is: Even if lldb isn't ready yet (it probably isn't), you shouldn't judge it based on your experience with Xcode. Being integrated with Xcode would make any piece of software look like junk even if Knuth wrote it.
It's not very well known outside of Redmond, but windbg (provided as part of "Debugging Tools for Windows") is an excellent debugger. Steep learning curve, as the commands are cryptic to the uninitiated, but once you get the hang of it, it's great. Visual Studio's debugger is absolute crap by comparison.
Thanks Jonathan for telling it like it is (and enduring the flames). As much as I hate Windows and like Linux, I agree on all your points. (Audio sucks ass on Linux!)
gcc/gdb tools suck because Stallman wants it this way. See Chandler Carruth's talk "Clang: Defending C++ from Murphy's Million Monkeys". Apple wanted to make gcc/gdb more modular, but Stallman doesn't want gcc/gdb to be modular and allow better tools because that would allow those tools to be non-GPL. So Apple spearheaded LLVM to ditch gcc.
LLVM/LLDB might be the way to improvements, but you are also right that the skills in building these higher level tools are not the strength of the current open source community.
So I don't end with a completely negative post on Linux development, Valgrind rocks!
The only good thing about GDB is that it can execute almost arbitrary C code in the frames. Otherwise it suffers from the same problems as GCC in that people decided that being able to embed GDB had to be as complicated as possible and it shows.
GDB only presents the GDB-MI (machine interface) textual interface as an "API". LLDB actually ships a library that you can link against. When it comes to integration, LLDB is miles ahead of GDB.
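As a rough sketch of what "linking against" LLDB looks like via its C++ SB API (the target name is made up and error checking is omitted):
#include <lldb/API/SBDebugger.h>
#include <lldb/API/SBTarget.h>
#include <lldb/API/SBBreakpoint.h>
int main() {
    lldb::SBDebugger::Initialize();
    lldb::SBDebugger dbg = lldb::SBDebugger::Create();
    lldb::SBTarget target = dbg.CreateTarget("./game");            // load an executable
    lldb::SBBreakpoint bp = target.BreakpointCreateByName("main"); // set a breakpoint by symbol
    // ... launch the process, walk frames, read variables, etc.
    lldb::SBDebugger::Destroy(dbg);
    lldb::SBDebugger::Terminate();
    return 0;
}
Doing the equivalent with GDB means spawning a gdb process and parsing GDB/MI text output, which is exactly the integration pain described above.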
Well, it wasn't deliberate.
GDB is just so old that embedding it was never a serious concern when it was built.
At some point, it was decided the sanest way to make it "embeddable" would be the GDB/MI interface, rather than a library.
The cost of Microsoft being unfriendly to game devs on Windows 8 is likely to be the cause of other platforms getting better, NOT people running in droves to Win8.
On even days I really wish they'd get this. On odd days I hope they never figure it out.
Linux sound has become much better over the last year, including USB headphones. I don't know if that's random, or if PulseAudio has finally started to bring some order to things.
There is, however, a different way to look at the issue of sound support. Problems in Linux tend to stay unsolved only when too few people care about fixing them. The few attempts made, like ALSA and PulseAudio, have been constantly criticized for having bad designs. One could then truly wonder why we don't see a proof-of-concept design that is superior.
Perhaps the random factor is which devices you have tried. I haven't had a problem with this for literally years, though I don't dispute that some people have problems; this comes down to the state of the drivers for the specific device.
When Microsoft has owned the vast majority of the desktop market forever, and vendors do not make it a priority to support Linux, it shouldn't be a surprise that new hardware is being sold which is not well supported on Linux.
But Microsoft won't own the entire desktop market forever. If anything Linux is going to dominate that on the low-mid-range when Android and Chrome OS are merged and Ubuntu on the high end for workstations and gamers.
The main reason is that debugging is terrible on Linux. gdb is just bad to use
I am a cross platform developer. I have spent years developing on Windows, Linux, and OSX. (Also on other OSs, but what ever.)
Debugging with gdb is very different from debugging with a GUI IDE like Visual Studio. And if you are coming from Visual Studio to gdb, then debugging with gdb might seem downright impossible.
However, if debugging with gdb is so terrible, that means that each and every UNIX and all the software which runs on them were developed with this almost impossible to use debugger.
And Linus and the entire Linux dev. community, who rolled out Git almost on a whim, are stuck with gdb and can't come up with anything better.
Can that be true? Or is gdb in fact a great debugger, but with a very steep learning curve?
In my not so humble opinion, gdb is much harder to pick up intuitively than Visual Studio. But if you do take the time to learn it, it is just as productive.
Actually, in my experience, it is slightly more productive, because it forces you to work slower and think more. This go-slower-and-think-more strategy has also made my Visual Studio debugging better, although if I spend too much time in Visual Studio I can slip into a pattern of less thinking and more breakpoint setting.
tl;dr:
It's good that you are a very experienced game developer, but being a single platform developer kind of sucks. Become the kind of developer who doesn't care what platform they get to develop on.
Also, re this: "but being a single platform developer kind of sucks. Become the kind of developer who doesn't care what platform they get to develop on".
I agree with this in principle. But in practice... I invite you to develop large and complicated projects (like non-small games) and see if you retain this opinion. I find that work environment matters, a lot.
The thing that's a little sad is that developing on Linux could be great, if only open source developers had a culture of working a little harder and taking their projects past the 'basically working' stage and toward hardened completion. When things are solid and just magically work without you having to figure out how to install them or what config file to tweak, it drastically reduces friction on your productivity. So there's a productivity multiplier that open source developers are not getting, thus making all their work harder; because hardly anyone works on the basic tools, everyone else's development is more painful and takes longer, but nobody realizes this because everyone is used to it.
If someone made a debugger that let you inspect/understand programs 2x as efficiently as gdb does (I think this is not even an ambitious goal), how much would that help every open source developer? How much more powerful and robust and effective would all open source software become? (At least all software written in the languages addressed by said debugger...)
Loved the first answer. I still remember the OpenGL 2.0 debates.
What puzzles me is the little mention of the horrible state of video and audio in Linux. ALSA was an attempt to "fix" the audio sub(eco)system. The broken audio is probably one of the reasons why there are still no proper audio apps on Linux.
Video? Well, if you have a recent NVIDIA card on a desktop system you can at least play games, but that's about it. Kernel developers make sure the closed-source blobs don't use too many GPL features. NVIDIA blames it on the kernel devs, the kernel devs blame it on NVIDIA.
Everyone talks about how awesome the open-source Intel drivers are, and even though they got a lot faster recently with all the Valve feedback, they were an order of magnitude slower than the Windows drivers (I don't know if that's still the case). And Mesa? Well, judge for yourself.
I completely agree on your whole developer-ecosystem point. To me that is the sole reason why Mac OS X desktop utility apps thrived while Linux's more or less stagnated (no data, just a feeling, and maybe that's a bit harsh, but compared to OS X apps that's certainly the case).
I don't know how anecdotal that is, but I've heard about pro audio users downright loving JACK [0] (not an audio app, but they must be using this toolkit for something).
Personally I like PulseAudio which has worked just fine for quite some time here on Arch Linux, while it always seemed to be a bloody mess of an integration on Ubuntu.
Jonathan, thanks for the post. Did you do the Braid Linux version yourself by the way, or did you get some external help in order to produce it? From there on, based on your sentiment, does this mean that your upcoming games will not support Linux, or that you will let someone else take care of it?
I recently asked Jon about getting Braid on steam and he said it being listed as Linux compatible was an error. So I think someone at Humble Bundle did the port for him.
It's a really stupid question (but this does not take away from the value of the incredible first answer). It makes a huge, blind assumption (running on Windows means using DirectX and explicitly not OpenGL) and ends up assuming correlation implies causation.
There are many Windows-only games written in OpenGL, and the ones written with DirectX aren't necessarily written that way because DirectX is better, but because there are more consumers on Windows.
The short and simple answer is that "real," modern games are ridiculously expensive projects. The highest percentage of PC-gamers can be found (at the moment) on Windows. Ergo, you will write Windows games and use libraries (OpenGL, DirectX, or otherwise) that are available on that platform.
I wouldn't say it's a stupid question. It's a broad question seeking a better understanding that most of us take for granted. And it clearly was treated as a good question by the absolutely astonishing answer it garnered.
Humanity is better for this exchange on Stackoverflow - we need more discussions like this.
"Humanity is better for this exchange on Stackoverflow - we need more discussions like this."
Outstanding. Made me smile. I appreciate the level of grandeur you are attempting to bring to bear on this discussion, but this is a discussion about Game Development by the way.
The "amazing" part of the OP's discussion is not the immediate topic at hand, but that someone took the time to go past the pat answers ("because Windows has more users, etc") and consider all the other factors that have led to things being the way they are. If only more important fields -- education, medicine, politics, justice -- had more people willing to spend a summer hour examining their deeply-held assumptions.
It was a dumb question, but if you scroll down, someone responded with an awesome history of the war between DirectX and OpenGL. I'm pretty sure that's why it was posted here.
As someone who's never done any game development at all, can you give me some sense of the work that would be involved to port a Windows-only OpenGL game to Mac or Linux? What pieces of it would need to be redone or significantly overhauled?
I would have expected it would be quite minimal compared to the work involved in developing the game itself, but I guess I must be wrong if there are "many" Windows-only OpenGL games.
Sound. Networking. Decent async file I/O. Threading libraries. Dynamic loading of components. Anti-cheat techniques (or any other security stuff you might need). Device input. Choosing and supporting a platform (and the follow-on customer support issues). Diagnosis of customer problems (because they _will_ have them).
Sound: OpenAL abstracts this decently. If you're not using it or an equivalent, you're doing it wrong.
Networking: Most folks building a bespoke networking layer seem to go with ENET, which is cross-platform.
Async file I/O: yup.
Threading libraries: something wrong with Boost Thread? (Boost gets coroutines, too, in 1.53, which I'm pretty excited about.)
Honestly, the biggest problem I see in writing a cross-platform game is device input. Trying to build a joystick/gamepad-based game for Linux and OS X is pretty painful. Fortunately I've lucked onto GLFW as a windowing toolkit, which handles the HID Manager boilerplate and other badness for them.
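As a rough sketch (assuming GLFW 3; older versions spell these functions differently), polling a gamepad without touching any platform HID code looks something like:
#include <GLFW/glfw3.h>
#include <cstdio>
int main() {
    if (!glfwInit()) return 1;
    if (glfwJoystickPresent(GLFW_JOYSTICK_1)) {
        int axisCount = 0, buttonCount = 0;
        const float *axes = glfwGetJoystickAxes(GLFW_JOYSTICK_1, &axisCount);
        const unsigned char *buttons = glfwGetJoystickButtons(GLFW_JOYSTICK_1, &buttonCount);
        std::printf("%d axes, %d buttons, axis 0 = %.2f\n",
                    axisCount, buttonCount, axisCount > 0 ? axes[0] : 0.0f);
        (void)buttons; // each entry is GLFW_PRESS or GLFW_RELEASE
    }
    glfwTerminate();
    return 0;
}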
You can also just use SDL, which provides abstractions for networking, sound, graphics, and input. If you hit snags, you could just commit the fixes back to SDL proper. I've used it a bit and have had good experiences so far with successful cross-platform builds.
It also supports Android, so in theory you could almost have one code base targeting every PC OS and Android, with only a UI and input revamp for the Android target.
SDL is a thing, I guess, but I personally have little use for it. I mean, I've tried it, of course; I'm sure most people have. But, and maybe it's just me, but if I'm going to have to do the legwork of wrapping some C API into a decent, lifecycle-manageable object model, I'm probably going to use the lowest-level one I can so I can state with authority that I fully understand what everything in my stack is doing at all times. Like, OpenGL's C API is super gross, but only very slightly less so than dealing with SDL. And while, sure, "you can commit the fixes back to SDL," you first then have to go unwind their abstraction to actually understand what the hell it's doing--which for me, given the relatively small feature set I need, is almost as much work, if not more, than writing it yourself.
Also, SDL 1.x is LGPL (instant dealbreaker, I build with static libraries on a Mac to avoid dylib hell) and SDL 2.x isn't done. So there's that, too.
In the spirit of abstraction and not having to build the object model I'd get for free with Direct3D, I used SFML for a while; the general idea of it is promising, but the abstraction is janky, it's desktop-only, and I still have a bitter taste in my mouth from the way they handled showstopper bugs in SFML 1.x as far back as three years ago: "Completely reproducible crashes on all modern ATI cards? Just upgrade to our incompatible 2.x alpha!". (2.0 still isn't out, which makes it super groovy.)
I now use GLFW for windowing/input, ENET for networking, and PhysFS for I/O; I can pretty easily use each of them in a way that maps pretty well to my own preferences for an object model and a mental model. Probably not scalable (for a multi-person project or a 3D game I'd almost certainly use OGRE rather than doing it myself), but nice for my purposes.
The only part of my codebase that isn't pretty trivially platform-independent right now, without any special effort, is in "platform.cpp", which does the tremendously difficult work of finding me the user directory and the appdata directory. =) I sure haven't done anything special to make it build both in Xcode (my primary environment) or Visual Studio.
OTOH, SDL2 is developed by Sam Lantinga, currently an employee at Valve (IIRC), so this is probably the most promising library of the three you mentioned.
For one, it [SDL2] uses XInput2 for relative mouse input on X11 (and Raw Input on Win32), which is soooo much better than the classic WarpPointer mess.
Another thing: it is really preferable to put the libSDL shared library next to your executable, not compile it in statically, at least for Linux builds. Quite a number of problems with old games running improperly on, erm... modern Linux desktops are solved by swapping the bundled libSDL for the distro-provided version.
I'm speaking specifically about windowing/input part of SDL, have zero experience with other parts.
The main difficulty in porting has less to do with OpenGL itself and more to do with the number of platform-specific issues you encounter when you're trying to port code that is highly optimized for specific platforms.
They're almost all program and language dependent. If your game doesn't use threads, you don't have to port anything to do with threads. If it does, but you use Java or C++11 and their platform independent thread libraries, you (ideally) don't have to port anything to do with threads. But maybe you used C or C++98 and now you have to run around putting in ifdefs for CreateThread vs. pthread_create and debugging subtle differences between Win32 and POSIX threads, etc.
Or maybe you decided to write the whole thing using multi-platform libraries like Qt in the first place, so that you hardly have to change anything. Or maybe you wrote it in Visual Studio using .NET heavily and you have to practically rewrite everything from scratch. It's totally dependent on the decisions you made when writing the original program.
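For a sense of what those ifdefs look like, here is a minimal sketch (type and function names are made up, not from any real engine) of wrapping CreateThread vs. pthread_create:
#ifdef _WIN32
#include <windows.h>
typedef HANDLE game_thread;
// Win32 thread entry points return DWORD and use the WINAPI calling convention...
static game_thread game_thread_start(LPTHREAD_START_ROUTINE fn, void *arg) {
    return CreateThread(NULL, 0, fn, arg, 0, NULL);
}
static void game_thread_join(game_thread t) {
    WaitForSingleObject(t, INFINITE);
    CloseHandle(t);
}
#else
#include <pthread.h>
typedef pthread_t game_thread;
// ...while POSIX entry points return void*, one of those "subtle differences"
static game_thread game_thread_start(void *(*fn)(void *), void *arg) {
    pthread_t t;
    pthread_create(&t, NULL, fn, arg);
    return t;
}
static void game_thread_join(game_thread t) {
    pthread_join(t, NULL);
}
#endif
With C++11 the whole thing collapses to std::thread, which is the "platform independent thread library" case above.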
In addition to what AnthonyMouse said, it also gets a bit more complicated than that. If you have a high performance game engine, it's very likely that the inner loops are written in assembly to squeeze every last drop of performance out. The guys who write this code are optimizing for the OS's memory handling behaviors, aligning data to fit into cache lines, using SSE instructions for faster matrix multiplications, and all sorts of other exotic techniques that "mere mortals" don't even think about.
So when you change platforms from, say, the Xbox 360 to the PS3, those assumptions you made about the OS, the CPU, the chipset, and other minute details are suddenly completely, horribly wrong. So the guy who wrote all of that highly optimized code has to write it again to optimize against a completely different set of quirks.
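A tiny, made-up illustration of the kind of code being described: an SSE add of four floats, with the data forced onto a 16-byte boundary. Port this to the PS3's SPUs or to NEON on a phone and every line changes.
#include <xmmintrin.h>  // SSE intrinsics (x86 only)
struct alignas(16) Vec4 { float v[4]; };  // 16-byte aligned so the aligned load below is legal
Vec4 add(const Vec4 &a, const Vec4 &b) {
    __m128 ra = _mm_load_ps(a.v);            // aligned load: faults if the alignment is wrong
    __m128 rb = _mm_load_ps(b.v);
    Vec4 out;
    _mm_store_ps(out.v, _mm_add_ps(ra, rb)); // four adds in one instruction
    return out;
}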
I don't see him making that assumption. It's the other way around: using DX means Windows, which is correct. Even so, he is asking if this could be the reason and I don't see how this is a stupid question.
So are indie games, notgames, and Minecraft all not real for you? Reserving the term "modern games" to only "ridiculously expensive projects" and dubbing the rest "not real" is... just one point of view.
What's interesting to me is the fact that makers of indie and notgames, that do not require the enormous scale and pushing the hardware and programmers to their limits - they often make them for Windows, too!
The highest percentage of PC-gamers can be found (at the moment) on Windows. Ergo, you will write Windows games and use libraries
Indie game developers want to reach the largest audience as well, so I don't see the difference. Additionally, the fact that Windows has such a high marketshare among consumers means that it likely has a high marketshare among indie game developers from the get go.
Because OpenGL is ugly as hell. DirectX is rewritten often (with new versions) to accommodate new ideas, whereas OpenGL new features/ideas are just "tacked" onto the existing code base.
OpenGL also has a huge barrier to entry compared to DirectX.
MSDN DirectX resources are fantastic, whereas resources on OpenGL are often out-dated and generally pretty crap.
DirectX requires the DirectX SDK, whereas OpenGL requires GLUT, or GLEW? I think. Perhaps FreeGLUT? OpenGLUT? Or can you just use SDL? Or none of them? What? Exactly.
I wouldn't say OpenGL is ugly as hell. Modern OpenGL (using vertex buffers and such) isn't too bad. But politics, platform-compatibility and corporate hating aside, Direct3D is the better API.
OpenGL suffers from a lot of the same issues as HTML -- designed by a committee of mostly competitors. It should surprise no one that while systems that come out of such arrangements can be vital for the industry, they are very slow moving when compared to an SDK defined by a dictator (benevolent or otherwise) and much more hesitant to allow for the massive changes that new technology would otherwise allow.
I've heard this same sentiment from numerous programmers, including many who prefer Linux. I'm thinking this is probably an informed opinion, not an uninformed one, and should probably not be in the negative in votes.
Frankly, complaining that developing for Direct3D requires just the DirectX SDK, and that developing for OpenGL requires some combination of other libraries is a stupid complaint. It's the kind of complaint that only complete newbies might have - anyone with any idea about what they're doing will not have a problem with it.
Not that it's hard, anyway. You don't need any actual libraries to use OpenGL. You just need up-to-date headers, and a way to link to the system's OpenGL library. The easiest way to get those is to use an extension loading library, which provides those headers, links to OpenGL at run-time (same thing that Direct3D does) and makes all of the available extensions available automatically.
Complaining about libraries like GLUT, GLFW, or SDL is completely irrelevant. If you were using Direct3D, you'd be writing windowing code directly on top of the Win32 API, which you can still do with OpenGL. If you want to use one of these libraries to handle the platform-specific windowing and OpenGL setup, you can. You don't have to - even in portable games, you can still just write your own platform-specific windowing and OpenGL setup code.
So the development setup and libraries are different. So what? It's fairly simple to work out for anyone but a complete beginner. On platforms like iOS or Mac OS X, OpenGL is actually trivial - everything you need is included in the platform's SDK. Same with Android. Same, to an extent, with Linux. OpenGL is only really awkward on Windows, and you can guess who's to blame for that.
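For what it's worth, a minimal sketch of that setup (assuming GLFW and GLEW are installed; any extension loading library works the same way):
#include <GL/glew.h>     // extension loader: current headers plus run-time function loading
#include <GLFW/glfw3.h>  // cross-platform window and context creation
int main() {
    if (!glfwInit()) return 1;
    GLFWwindow *win = glfwCreateWindow(640, 480, "demo", NULL, NULL);
    glfwMakeContextCurrent(win);
    glewInit();  // resolves every available GL entry point and extension
    while (!glfwWindowShouldClose(win)) {
        glClear(GL_COLOR_BUFFER_BIT);
        glfwSwapBuffers(win);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
The platform-specific part (WGL/GLX/CGL) is entirely hidden here, but as noted above you could just as well write that windowing code yourself against the Win32 API.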
But there are plenty of legitimate complaints about OpenGL.
The API is much more complex than Direct3D. There are dozens of ways to do things, with no way to tell which is the best way (short of looking at what Direct3D is doing, and copying that). Behaviour and performance is inconsistent between driver vendors, versions of driver from the same vendor, and across platforms. Some OpenGL drivers (Intel's on Windows, until very recently) are effectively unusable, or don't expose all of the hardware's functionality that Direct3D does. There are lots of legacy issues to carefully creep around, and they can still bite you even if you're not using the legacy features. The shader compilers aren't consistent between different vendors, and are often incredibly slow. Linking shaders is kind of clumsy. Since the API and drivers are much more complex, it's more common to run into severe performance issues caused by the driver than it is with Direct3D (and consoles are better than Direct3D in this regard, because their graphics drivers are so thin that they essentially don't exist).
Even picking a decent subset of OpenGL to use is a pain. Do you use OpenGL 2.x and ignore large chunks of it to get a reasonable API, and broad support? Do you go for OpenGL 3.x or 4.x instead? Which version? Which set of extensions? It's dead easy for OpenGL ES, though - 1.x (fixed function) or 2.x (shaders), neither of which carry around the legacy crap that OpenGL still carries.
Dealing with all of that is the problem with using OpenGL, not working out how to link to it.
Oh, and documentation. That really is shocking, mostly because the majority of documentation or guidelines you might find are written for OpenGL 1.x, and are completely obsolete.
There are some good beginner-level guides to getting started with modern OpenGL:
Question: why don't people just use OpenGL ES on Windows/Mac/Linux as well then? It sounds like the right level of not-backwards-compatible, yet available everywhere.
This is why I chose to learn DirectX first rather than OpenGL.
It's kind of like the Android vs iOS debate. DirectX is like iOS in that there's one main feature set which runs on a lot of devices. Whereas OpenGL was pretty fragmented because of extensions.
The only fragmentation on DirectX that I remember is the vertex-texture fetch (which was only supported by NVIDIA cards) vs render to vertex buffer (only supported by ATI cards) features in DirectX 9.0c. After that, Microsoft really started emphasizing the notion of a standard feature set.
OpenGL moved away from all of that starting with OpenGL ES, which is partly why it has become so popular.
Well -- after some significant time wrestling with OpenGL/DirectX under Windows -- I got totally fed up and moved to OS X with Objective C... Objective C syntax certainly takes some getting used to -- but the iTunesU developer videos and overall documentation seems much better and more cohesive than the MSDN stuff ever did.
I guess, however, it's always a balancing act... I mean -- I thought Microsoft was supporting C# as its primary language -- however, Managed DirectX (with examples and documentation) never seemed to arrive (in fact it seemed to disappear)... C++ +MFC has to be the worst development schema ever created (luckily I managed to skip that part of the development history by ducking to VC roles)... I think the only thing that is clear about MFC is that it is a M. F. C. and anything to get away from it including Interop and switching to OS X/Objective C (even if Objective C is replete with legacy C APIs etc.)...
Matter of opinion. I vastly, vastly prefer it to D3D and its unnecessary OO abstractions. The GPU is not "object oriented" and the OpenGL state machine is much closer to how the GPU works. It takes some time learning to understand the underlying philosophy, but if you stick with the core profile (3.2 or newer) and ignore the legacy profile, it will prove a really quite beautiful API to work with.
> DirectX is rewritten often (with new versions) to accommodate new ideas, whereas OpenGL new features/ideas are just "tacked" onto the existing code base
What's the problem here? New functions are added, but older ones are still valid use cases. Seriously outdated stuff that no hardware really natively supports any more, such as the fixed-function pipeline, is clearly and most obviously flagged everywhere as outdated legacy ("compatibility profile") API and its use discouraged. However, if you really wanna fire up an old OpenGL 1.1 app you wrote as a teen some 15 years ago, you still could. Now that's actually rather neat, isn't it?
> OpenGL also has a huge barrier to entry compared to DirectX.
Nope? Don't even need an SDK for that. It's pre-installed on all major OSes and implemented by all GPU drivers for years.
Even better, there's no need for what in D3D results in a DirectX install on client machines. That annoys me, actually: whenever I install a new game with Steam, it proceeds to "install DirectX" (sometimes v9, sometimes v10, sometimes v11). Sometimes even just when I start an already-installed game after a longer period of not playing it. WTF? DirectX has been "installed" so many times by now, isn't once enough? And can't DX11 automatically provide a DX9 layer for older games? I know that my GeForce-provided GL 4.2 driver happily implements "all GL versions ever".
> MSND DirectX resources are fantastic, whereas resources on OpenGL are often out-dated and generally pretty crap.
Khronos' GL/GLSL API documentation pages are all comprehensible, accurate, up-to-date and complete, and so are their spec PDFs if that's the preferred format. Their Wiki pages also provide a wealth of absolutely current high-quality information.
As for tutorials out there, some are good and up-to-date, some not -- same for D3D I'd wager?
> whereas OpenGL requires GLUT, or GLEW? I think. Perhaps FreeGLUT? OpenGLUT? Or can you just use SDL? Or none of them? What? Exactly.
DirectX 10 and 11 are part of the OS and there are no standalone installers anymore. What Steam does is noticing that the game has a dependency on DirectX and it launches a stub installer but apparently cannot actually check whether it runs on a recent Windows version where that thing does essentially nothing. There is a DirectX-9-compatible layer in current DirectX.
I don't agree. SFML sounds at first like a really good idea, and at first usage seems pretty cool. But there are invisible walls and they're not hard to run into. The API reflects the developer's way of thinking and I find it somewhat odd (though, to be fair, I'm sure plenty of people would consider the API I've built for my own private library to be somewhat odd, too), and I noticed in my most recent experiments with the 2.x beta that you end up having to drop out of the friendly confines of the SFML libraries into the underlying implementations alarmingly often to do anything interesting.
There are also what I consider reliability issues. SFML 1.6 and 1.6 have reproducible startup issues on any semi-modern ATI/AMD graphics chipset I was able to test it on (2008 to 2010 vintage). The recommended solution from the developer was to upgrade to the API-incompatible 2.x alpha--which still isn't even released as a 2.0 today. It's open-source, and totally his choice whether or not to fix bugs like that (at the time I wasn't capable of doing it myself); I'm not saying "oh, he's not at my beck and call, he sucks," but I'd be very wary of going back there again and it did play a factor in me choosing not to use SFML for my current project.
> Old John Carmack at Id Software took one look at that trash and said, "Screw that!" and decided to write towards another API: OpenGL.
"Old John Carmack" makes me think he's some sort of half-myth crazy coot mountain man, which made me realize something: There will some day be mythology about computer programmers.
If you mean one day people will remember the names of programmers in a mythological manner, I doubt it. If you mean there will exist a mythos of programmers. Their persons. Their ideals. Their exploits.
There are books: "Dealers of Lightning", "What the Dormouse Said", "Hackers: Heroes of the Computer Revolution"; and beyond them a large amount of information available only online ( textfiles.com is fantastic ). Things like 2600 and phrack. All of the old 'zine releases made by various cracking groups, like "Legion of Doom", where at this point all that remains are various communiques thanking lists of pseudonyms that will likely never be linked to actual names. Older stories, like Mel. The Woz's hacks and Gates' angry letter to early software copiers. The lexical analysis to pin n3td3v as Gobbles Security ( still unconfirmed afaik ). Some of the older bits by related groups, such as phreakers, aren't necessarily programming mythos, but fall into the same crowds and history folders, individuals like Captain Crunch. There are Torvalds and Tanenbaum's humorous exchanges. The first worm. utf8's placemat birth. Larry Wall's idiosyncrasies. Well, all programmers' idiosyncrasies. esr. rms. dmr. ken. Anyone else that can be identified by a tla, plus or minus an underscore prefix. Hacker ethic, free software and open source manifestos, deconstructions, apologies, rebellions and dismissals aplenty. Hackers v crackers and demoscene's birth from the latter. Spacewar, nethack, rogue and a thousand MUDs to idle your time away. Many other things that haven't popped off the top of my head.
There won't some day be a mythos such as this. It is here now. And it will only grow further.
There already is. Stories about Seymour Cray building (and then burning) a boat every year, tons of stories about Woz, tons of stories about rms, tons of stories about any number of the other greats, some true, some embellished, some made up from whole cloth.
There has, is, and will continue to be. Seeing that Carmack practically invented the FPS genre over 20 years ago and he's still writing games today, I view him as a mythical figure.
One reason is just that since game development is mostly Windows-based, new programmers learn to use Windows tools, and thus become more experienced with them and don't want to switch to something completely new, which would require a loss of productivity.
Also, and that's really only based on my experience, it seemed to me that game developers aren't that much into "hacking" Linux. Things should Just Work. The terminal is a deprecated tool compared to a new intuitive IDE. Most of you won't agree with these statements (and I clearly don't), but that's really what I've felt every time I'd talk with a game developer friend or colleague. And this mentality goes back to when I was an engineering student. While I had fun learning Linux and Python, friends graduating in video gaming would laugh at Python for being a "scripting, non-performant language", or they would play around in Visual Studio with C#, making forms, or build some games with XNA.
So, sometimes I think it's not so much about Linux not being "good enough" but more about video game developers' mentality not fitting with it. Man, I won't lie, I'm an experienced Linux developer and user and I always have to lurk in all kinds of forums and RTFM for every small thing that needs to be installed (which is not in the standard repositories). And it often happens that I break stuff and spend countless hours trying to fix it. Hell, even last week I upgraded a server on Arch Linux and it totally broke my system without any warning. I really needed help, so I went on #archlinux and was told that I should have read the Arch Linux news before attempting an update. Fair enough, I guess, but I can understand that on an extremely tight schedule game developers can't waste time on stuff like that. Compared to, say, a [next] [next] [finish] Windows install where everything just Works.
Please don't down-vote me if you don't agree with things I've seen from my experience. I don't agree with all of it either, but I thought I'd share it here. It's really something I see coming up again and again from all kinds of different video game programmers.
As a games programmer with coming up to 5 years experience, I can attest that this is largely true. At every studio I've worked at, I've been given strange looks for even opening a terminal, and half of them haven't even heard of Vim. Regardless of what the actual strengths and weaknesses of the tools are, most game developers are so heavily wedded to VS and Windows that there is a huge inertia on that side.
Personally, the mobile game that I'm developing now is being developed entirely under Linux with GCC and GDB, and although GDB has a bit of a learning curve I now find it easier and quicker than VS.
> Hell, even last week I upgraded a server on Arch Linux and it totally broke my system without any warning. I really needed help, so I went on #archlinux and was told that I should have read the Arch Linux news before attempting an update. Fair enough, I guess, but I can understand that on an extremely tight schedule game developers can't waste time on stuff like that. Compared to, say, a [next] [next] [finish] Windows install where everything just Works.
I don't think Arch is a fair comparison to make. The Arch wiki describes it as:
"Arch Linux is an independently developed, i686/x86-64 general purpose GNU/Linux distribution versatile enough to suit any role. Development focuses on simplicity, minimalism, and code elegance. Arch is installed as a minimal base system, configured by the user upon which their own ideal environment is assembled by installing only what is required or desired for their unique purposes. GUI configuration utilities are not officially provided, and most system configuration is performed from the shell and a text editor. Based on a rolling-release model, Arch strives to stay bleeding edge, and typically offers the latest stable versions of most software." [1]
I use Ubuntu primarily, but use Windows exclusively to run LabVIEW for a local robotics team. I have had issues with both systems. The difference for me is that when the issue was with Ubuntu, I can normally get it back to a workable state within 10 minutes, or if desperate do a full reinstall in under an hour, after which point I have all of my same programs installed and my configurations/user data restored. When my problem is with Windows, solving it is a full day's work (or more).
[1] https://wiki.archlinux.org/index.php/Arch_Linux
> I upgraded a server on Arch Linux and it totally broke my system without any warning. I really needed help, so I went on #archlinux and was told that I should have read the Arch Linux news before attempting an update. Fair enough, I guess, but I can understand that on an extremely tight schedule game developers can't waste time on stuff like that.
"Be careful with updates, they might break things" is common advice. "Let other people apply the service packs, and see what the problems are, before you go ahead" was (is?) common advice for people running some MS server stuff.
As a 2D video game developer with many titles under my belt, I can assure you it's simply a matter of market share. The general rule (before hybrid CDs) was: write it for Windows and if it sells, then write it for Mac (later we would release both Windows and Mac on a hybrid CD).
Traditionally Linux was all over the map in terms of graphics, and market share was nil. Nowadays, Linux is prevalent among computer geeks, but in the general population (that have the real $) it's effectively nonexistent.
Well, most game development happens for consoles - so if the question is "Why do game developers use Windows?" rather than "Why do PC game developers use Windows?", then it is still an interesting question.
If you're coding for PS3 or Wii, why is Windows better than Linux or Mac? Marketshare doesn't explain that.
The tooling is generally better on Windows, or was. A buddy of mine who does some pretty intense PS3 work switched to a Mac about a year and a half ago and said that it was rough going when he'd tried a year or so before that.
..and then another decade passed and OpenGL ES won every growth market, returning PC gaming to its niche status. Massive niche, of course, but niche. More Temple Run, dears?
Only on mobiles and we have to thank the iPhone for that.
Before the iPhone there were only crappy 3D APIs available: OpenGL done with software rendering, the J2ME 3D Mobile API, and even the Pocket PC had brain-dead versions of DirectX.
As for the game consoles, it is a myth in the FOSS community that they use OpenGL.
Except for the PS3, which has OpenGL ES + Cg, most consoles not from Microsoft have an OpenGL-like API, which is not the same thing.
As for the PS3, most developers actually make use of the proprietary GCM API.
Game developers target Windows because it's (or was?) popular. They'll increasingly target mobile devices alongside consoles (although that line is already starting to blur).
Users gravitate toward what's available, what's usable and if possible what has the most vibrant ecosystem of apps. Games, if the developers want to make any money, will target that platform. If the apps are common on a specific platform, then it stands to reason tools for developing other apps/games will also be common on that platform.
One of my friends is a gearhead, this is what he had to say:
"...Shader Model 1.1, which basically was "Whatever the 8500 does.""
Correction - it was Shader Model 1.4
The GF3 already did 1.1, and the GF4 Ti (3 years later) did 1.3.
A very good read though. A decade of drama put into simple words.
They closed the question. It irritates me that the self-appointed Stack Overflow police are very quick to close questions. They barely give them a chance to see if something good or creative comes along. (Such as the first comment.)
Games are already being ported to other operating systems because of Microsoft's greed.
MS wants to copy everybody else with an app and game store, which will be the only authorized way to download and install anything onto Windows 8+.
They also want to charge 30% to any vendor that wants to sell in their store. Since companies like Valve (Steam) have their own content delivery system, which will be blocked by MS, they have already panicked: they announced a new Linux client and have claimed to be committed to switching everything over the next few years and dumping Windows.
"which will be the only authorized way to download and install anything on to windows 8+"
No, you can install any Windows application on Windows 8. Windows 8 != Windows Store. The desktop is only a click away from the Start Screen (which can be completely removed, if needed, using a free application) and remains as flexible as on Windows 7.
I.e. Steam will work just as well on Windows 8 as on Windows 7.
Yes, Steam works fine. However, Steam games are patchy. I've got at least half a dozen which no longer work because of W8. I tried all the fixes drummed up by the Steam community, all for naught.
Reinstalled Windows 7. BAM, everything works exactly how you'd expect. Hell, it's refreshing not having a fullscreen start menu that takes 2 seconds to start using due to the fade-in effect and the disorientation you get from transitioning to "Metro" from "Aero."
I wholeheartedly believe Windows 8 is a waste of money and time for the desktop. It should've removed Aero and been tablet-only.
Whoa, Microsoft producing a new version of their OS (think Millennium, or 2000, or Vista) that has absolutely no use case and is rubbish, because they make money off per-unit sales and have an absurd amount of market dominance?
Windows 8 was not an operating system meant to be better than what came before; it was meant to get Microsoft some of that app store money. The priorities were thus apparent.
The CEO of Valve (the company behind Steam), Gabe Newell, disagrees with you. They (and Blizzard) are moving away from Windows 8 entirely.
Steam is coming out with its "Steam Box", a Linux-run gaming and home entertainment system. You can also run Windows on it if you want; it's not a proprietary console device, apparently.
There are articles everywhere about how Windows 8 completely killed the PC gaming scene.
Whether he agrees or disagrees won't change the fact, which is: Steam runs on Windows 8, and it runs fine. There are a lot of reasons not to support an OS besides compatibility.
With Steam, it wasn't a compatibility issue; it had more to do with Valve feeling threatened by the Windows Store model. Giving away X of every purchase to the store does undermine their profit model. It's the same reason you won't see subscription-based products such as Dropbox/Google Drive/SkyDrive with upgrade options in the Apple App Store.
However, I can't see how Windows 8 threatens the traditional form of sale: download to the desktop and play.
The fact that Gabe was attacking W8 comes down to Metro, not Windows 8 itself. I blame the sensationalist media for manipulating it into what it became.
The real threat is Metro, not Windows 8. It is the anti-freedom, walled garden that everyone should fear.
The console game makers don't have any obvious way to cut the retailers (or the console makers) out of the loop. In this case they do: Let people buy games for Linux instead of Windows and get an automatic 42% bump in revenue from everyone who does. Why wouldn't they do it? Especially given that they already have to write portable code in order to support Windows + non-Microsoft consoles + maybe Mac or (depending on what kind of game) Android and iOS, etc.
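(To spell out the arithmetic behind that figure, assuming the 30% store cut cited elsewhere in this thread: selling through the store, the developer keeps $0.70 of every $1.00; selling a Linux copy outside it, they keep the full $1.00, and 1.00 / 0.70 ≈ 1.43, i.e. roughly a 42-43% increase in net revenue per copy.)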
It's a lot easier to port something to the third OS once you've already done the second one, because the second one caused you to identify and separate all of the bits that are platform dependent, or (better) choose platform-neutral libraries rather than platform-specific ones in the first place.
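To make "separate the platform-dependent bits" concrete, here is a toy sketch of the usual pattern (purely illustrative and hypothetical, not taken from any shipped engine): one small seam per OS facility, and the rest of the game never includes an OS header directly.

    // platform_time.cpp -- the only code that knows which OS it is on.
    // Gameplay code just calls TimeSeconds() and stays portable.
    #include <cstdio>

    #if defined(_WIN32)
    #include <windows.h>
    static double TimeSeconds() {
        LARGE_INTEGER freq, now;
        QueryPerformanceFrequency(&freq);   // ticks per second
        QueryPerformanceCounter(&now);      // current tick count
        return double(now.QuadPart) / double(freq.QuadPart);
    }
    #else
    #include <time.h>
    static double TimeSeconds() {
        timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts); // POSIX monotonic clock
        return double(ts.tv_sec) + double(ts.tv_nsec) / 1e9;
    }
    #endif

    int main() {
        printf("timer reads %f seconds\n", TimeSeconds());
        return 0;
    }

Porting to a third platform then mostly means adding another branch to a handful of files like this one, while the bulk of the codebase stays untouched.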
And if Linux becomes a common gaming platform, and is free and capable of running on all computers, it becomes easier for game makers to ultimately say "we're not supporting Windows anymore, here's a free Ubuntu live CD" -- or just raise the price of the Windows version of the game by 42% more than the Linux version to make up for the 30% cut Microsoft is taking and let the free market do the work for them.
The bigger problem is that people hate change. They don't want anything to change. They want to get Call of Boring 15 DLC Pack 582 and zombie their brains out for a few hours.
Installing Linux, although it is in many ways superior (I think iptables is a much better firewall, there's no need for antivirus thanks to a good privileges model, AppArmor can be really useful, and packages are amazing -- almost every Linux distro does them pretty well), is too much of a hassle for the 90% of people who want a computer the way they want a hammer or a TV. It is a tool: you hit the button, and something you want happens. Having to understand anything more complex than that requires way too much mental exertion.
It is, in the end, why "Linux on the desktop" never happened. It was never the default. It was never on the Best Buy shelf when granny's 15-year-old laptop broke and she needed a new Facebook machine.
I don't think anybody really expects it to happen overnight. But "it doesn't run games" has always been one of the major sticking points behind home Linux adoption. Just the native availability of major titles would be an enormous boost. Then throw in that Microsoft has given game makers a financial incentive to promote Linux gaming because game makers keep more of their revenues when their customers use Linux instead of Windows, and you have the seeds of change.
And nobody ever said it would start with grandma buying a computer with Ubuntu from Best Buy. Standard issue grandma is not a big gamer. Instead the gamers who are already at the margin of Linux adoption, who just need a little push, get it from game makers who now have the incentive to promote Linux adoption because it puts more money in their own pockets. They charge less for the Linux version, or release it a month earlier than the Windows version. Soon a lot of the people who call their computer a "gaming rig" are dual booting Linux, and bitching at any game company whose game requires them to boot into Windows. Only after that happens do you start seeing Ubuntu on computers at Best Buy.
There is a demographic issue there as well: the gamer population on the verge of swapping Windows for Linux over game availability may be going through life changes that reduce their impact as a customer base, even though their purchasing power might have grown substantially along with them. The best bet would be a video game console based on a Linux distribution to serve those Linux games, one without an entry tax to get published, as opposed to the current competition in this market.
Demand seems to always drive supply in this electronic economy (not based on any studies or scientific process; more of an observation).