On the other hand, when porting from D3D code, you don't have to touch the hairy parts of OpenGL at all, because D3D is functionally quite close to the "clean parts" of GL. But there's still some software overhead to overcome. They might also be using a proprietary D3D-on-OpenGL layer, like the ones being used for some Mac ports of Windows games (edit: seems like they are not).
I hope that Steam, and gamers that use Mac and Linux in general, will push for bigger changes in OpenGL, both specs and implementations.
disclaimer: I write GPU drivers for a living
Is that a reason for us not to take you seriously? Feel free to disclose any connection you have with the subject but please don't disclaim responsibility for your comments. :)
Unless, of course, you want to...
(Sorry for the no-content post but I see this word used in every second thread these days and it's one of the things that might be driving me nuts.)
So, "disclaimer" is spot on, I believe, because application writers might like to shed a tear, but not sympathize too hard...
So, if that is the correct interpretation, then it is not a disclaimer but more a disclosure. Which is what noibl was trying to educate exDM69 about.
But it is really all idle speculation until exDM69 chimes in.
Yes. Someone should do it. The major industry players seem to have the interest but lack the will to put in the resources it requires.
> Right now I'm excited about OpenGL ES 3.0 for mobiles
Prepare for disappointment.
This means I am constantly really impressed by the fact that we get anything done at all.
Unfortunately, I am also a crazy arrogant monkey, so am also constantly really deeply disappointed by the fact that we haven't all got jet-packs and holiday homes on the moon yet.
Small nitpick: OpenGL would be catching up with Direct3D, not with DirectX (which includes parts like input and sound that AFAIK are out of scope for OpenGL).
edit: found HN discussion about this link: http://news.ycombinator.com/item?id=2711231
The main reason why OpenGL did not get a restart was pressure from the CAD and 3D modelling industry.
These modern GPUs are very generalized hardware now. Their drivers are "simulating" all kinds of features. It's a bit like OSS vs ALSA. They should just make the GPU a standard compiler target, and offer only the bare minimum (video-mode-setting) at the driver level.
Let the game industry write/design their own API on top. Let the CAD community do the same for their use cases. I know, with Mesa, this is kind of the setup. Except Mesa isn't executed by the GPU: it runs on the CPU, and from there it executes code on the GPU.
Wouldn't it be much nicer if GPUs just behaved like CPUs? They are just optimized for different types of processing. That's the interesting end-game: could the CPU be optional?
And the CAD/DCC market gets a say because they pay 10-100x more per seat than consumers.
No, it is called the GL 4.2 core profile. GL 4+ core profile is a superset of GLES2. It does not include any of the stuff left behind from GL 1.x days.
GLES2 and big-GL 3+ have the legacy graphics features removed, but the API is still horrible. More on this in my other comments in this thread.
I think what you meant is that OpenGL ES may one day be more popular and more used than the "full OpenGL", because there are a lot more mobile devices + WebGL.
In any case, I submit to the opinion of experts - my experiments with OpenGL have been simple and, when I attempted anything more complicated, they resulted in much more frustration than learning.
fixed that for you.
Quoting from the blog post: "We’ve been working with NVIDIA, AMD, and Intel to improve graphic driver performance on Linux. They have all been great to work with and have been very committed to having engineers on-site working with our engineers, carefully analyzing the data we see. We have had very rapid turnaround on any bugs we find and it has been invaluable to have people who understand the game, the renderer, the driver, and the hardware working alongside us when attacking these performance issues."
Leaving aside the amazing fact that Valve is achieving higher FPS on Linux than on Windows, the great thing about efforts like this one is that we can all expect the Linux desktop to become even smoother, faster, more seamless. I love it!
I'm not even talking ideologically here. The OSS drivers are included along with the kernel in most distros. They're what you get by default when you boot a new Linux install. They get upgraded alongside other core packages by the distro's package manager and are generally quite up-to-date compared to the "official" drivers in the package repository (if they're even there at all).
I use the OSS radeon driver at home and the OSS nouveau driver at work, and both are quite adequate for my purposes (which involve no real 3D). I think it would be a bigger improvement to the ecosystem to have these drivers get some love.
(Or, alternately, for the proprietary drivers to go open-source and be included with the kernel).
Their work with Intel is on the open source drivers.
Near as I can tell, the only Intel display adapters are the ones integrated into laptop motherboards. You may be able to buy the occasional desktop motherboard with integrated Intel graphics, but good luck finding one that can support two DVI monitors or dual-link high-resolution displays or anything fancy like that.
However, the last I looked into this was a while back; perhaps the situation has improved?
The HD4000 isn't the most capable chipset on the market but for most applications, light-duty use, it's fine. Even the basic versions of this chipset could support two 24" screens without much trouble. Just don't expect benchmark-shattering 3D performance.
Of course that probably doesn't prevent them from giving the OSS drivers some love, but that is another story.
I'm actually quite glad that adequate drivers are available, even if they're not open source. That outcome wasn't guaranteed.
Well I don't know... For the sake of attracting more users, Linux usage tends to become more and more Windows-like. Best illustration is the dropping of network transparency in Wayland because "most people don't care".
I suspect that there are going to be more and more concessions like this.
Moreover, as I understand it, you will be able to run X.org as a Wayland client, so you're not losing anything compared to today's setup. (IOW, you will be able to run remote X11 apps transparently over the network on a Wayland server.) Not only that, but Wayland clients can use any other network protocol -- VNC, RDP, etc.
IMO Wayland too is a big win for the Linux desktop.
Not really; applications developed for X.org control the display through a drawing API. This is why network transparency (among other things) is so simple. If people start developing applications for Wayland, it won't be that simple/elegant.
We are dropping a level of indirection for performance purposes. That is quite unique in history.
This is a big loss for stability and usability.
Fetching untested software and installing it manually is very Windows-like.
I wanted to read some real hacker news discussion, which we're now getting with a better title.
Free software proponents tend to be less level headed on the issue. I think there aren't a lot of true "windows fans", but on the other hand, there are a lot of "linux fans" who live for the politics of free software. Lots of people use both proprietary and free software. I do. We are the ones who don't care for your politics. Unlike lots of hardcore linux proponents who are actively hostile to closed source software.
On purely technical terms, I like linux better than windows because of its ecosystem (terminal software, scripting, unix philosophy) but certainly not because I'm enamored with the politics of the FSF. OTOH I use windows not because I love it, but because I need it. That's being practical.
People who put other groups into boxes, like "anti linux" or "Windows fans", aren't practical; they're turning software into a religion. You're even speaking of a "revolution"... for god's sake.
You're the only one here treating it as a serious and solemn issue.
But I feel very rejected by the GNU mindset that "all proprietary software on my platform is bad".
Why wouldn't they switch back to OpenGL on Windows, then?
Besides the absolute numbers, I wonder how the hardware behaves with resource-intensive applications like games. From personal experience, every laptop and computer I have owned has had its fans spinning much louder when running ut2004 (and other games) on Linux than on Windows, whether because it gets hotter for less computational power or because of more aggressive default fan settings in distros.
Because no-one likes to program with OpenGL. Because of its history, it is one of the hairiest APIs out there. Try to propose using OpenGL to a game developer and you'll be laughed at.
Many GPU vendors have been doing a bad job with their OpenGL drivers on Windows, and those drivers have been really buggy, but the situation is improving.
I write OpenGL because I don't use Windows. I still hate it.
I usually use the latest GL only. Today that would be GL 4.2 core profile. Or GLES2.
Sure, the old fixed function pipeline and immediate mode are gone. But the API still sucks. Bind-to-modify semantics, using integers for handles, global state everywhere, etc. These are the big problems, for both users and implementors.
And I don't really know what 'global state everywhere' means. Do you mean that you keep OpenGL-related info in your program's global state? In that case, you should be using some AppState object to keep the state (and pass a pointer around). Otherwise, what global state is there "everywhere"?
> With my limited knowledge of OpenGL, I know you get a number, describing a resource in video card memory. I bet for an OpenGL implementation, binding a resource id means finding a pointer to the resource. Sounds reasonable to me.
Wrong. There's a lot of software in between, before we start talking about GPU resources, so it refers to a structure on the CPU side. The integer-to-pointer lookup in between is another cache miss that we don't need (it has measurable L1 cache effects). This has, to some degree, been worked around by adding state objects like "Vertex Array Objects" (similar to D3D input layout).
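Purely as a hypothetical illustration (not any real driver's code), the lookup being described amounts to something like:

    struct gl_texture;                       /* opaque driver-internal object */
    extern struct gl_texture *name_table[];  /* hypothetical name -> object map */

    static struct gl_texture *lookup_texture(GLuint name)
    {
        /* One extra dependent load on every bind; if this entry is cold,
           that's the L1 cache miss being described above. */
        return name_table[name];
    }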
> Would you rather get an invalid pointer that you should refrain from dereferencing (because you'll be reading from RAM, not GPU RAM)? That would've been a disaster.
Opaque pointers are used quite a lot in that way (e.g. D3D, via object pointers); it's a common idiom in C that adds a little type safety and documentation. In this case the pointers would be internal structures in the GL driver's user space, but you should not dereference them anyway. Unix uses integers as handles because the stuff they refer to is inside kernel memory, and you could poison the kernel by passing invalid pointers.
> Do you mean that you keep OpenGL-related info in your program's global state? In that case, you should be using some AppState object to keep the state (and pass a pointer around).
I don't. OpenGL does. In OpenGL, you have a global variable (actually: thread local) called the "context" which contains all the state of the rendering pipeline. In essence, it's a global variable that has side-effects when you access it.
It has 1) inescapable measurable performance overhead and 2) affects state that changes the rendering output.
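To make the complaint concrete, here's a minimal sketch of the bind-to-modify pattern (width, height, and pixels are placeholders):

    GLuint texId;
    glGenTextures(1, &texId);            /* you get back an integer, not an object */
    glBindTexture(GL_TEXTURE_2D, texId); /* mutates the thread-local context... */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels); /* ...so this call knows its target */
    /* Whatever was bound to GL_TEXTURE_2D before is gone now; any code that
       relied on that binding has to rebind it. */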
So if you macro it, you're forcing yourself to call glBindTexture even in the cases where that wasn't necessary (say, if you had just called glBindTexture and knew that the right texture was bound anyway). And, in addition, you have to remember to store the previous state of the bound texture in case someone down the execution path wants to use that value. Macroing things which change global state is, in general, a terrible idea.
If you don't macro it, your code has to remember whether the right texture has been bound, and design around that. It can be done for sufficiently simple cases, but the general solution also has performance overhead (checking a local variable before calling glBindTexture will get you into branch misprediction problems).
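A sketch of that caching approach, assuming a single texture target for simplicity:

    static GLuint boundTex = 0;           /* shadow copy of the GL binding */

    static void bindTextureCached(GLuint tex)
    {
        if (tex != boundTex) {            /* the branch mentioned above */
            glBindTexture(GL_TEXTURE_2D, tex);
            boundTex = tex;
        }
    }
    /* Falls apart as soon as anything calls glBindTexture directly and the
       shadow copy goes stale -- the whole codebase has to cooperate. */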
You can imagine complicated solutions like state batching: writing a better, retained-mode domain-specific language atop OpenGL which will analyze the sequences of your calls before making it, and then remove all the unnecessary glBindTexture, etc. calls. But it's a major pain in the ass that simply shouldn't exist.
The EXT_direct_state_access extension works around this by taking the object's handle directly instead of going through the bind point:

glTextureImage2DEXT(texId, GL_TEXTURE_2D, ...);
No. SDL only deals with the platform specific parts like windowing and OpenGL initialization. It does not do any OpenGL for you.
GLFW is another popular tool for the same purpose.
SFML is better, but the developers are pretty uninterested (SFML 1.x was declared "legacy" while there was a bug that prevented it from running on modern AMD graphics cards) and I wouldn't trust it as far as I could throw it.
I do all my graphics work in OpenGL, but I'm aware of its shortcomings.
OpenGL dominates the massive mobile industry (those billion or so Android and iOS handsets run and live on OGL ES 2.0), and OpenGL was the premier desktop gaming API until relatively recently.
Then it fell behind not because it was a defective API, but rather because the OpenGL working group was far too slow to adapt to hardware improvements -- mired in infighting and bureaucratic BS -- while DirectX raced ahead with things like geometry shaders.
Of course mobile has completely changed the equation, and suddenly people are writing OpenGL engines that it then makes sense to use against all other targets (Windows, Linux, etc.).
Because for many cards, the opengl drivers are awful. If you are only implementing one of DirectX and OpenGL on windows, there really is no contest. Of course, you could do both, but then you have another option to give users, and two very different codepaths to test.
John Carmack was seriously angry about ATI's drivers when the game "Rage" was released.
"The driver issues at launch have been a real cluster !@#$, [...] When launch day came around and the wrong driver got released, half of our PC customers got a product that basically didn't work. The fact that the working driver has incompatibilities with other titles doesn't help either. Issues with older / lower end /exotic setups are to be expected on a PC release, but we were not happy with the experience on what should be prime platforms."
I unfortunately made the mistake of buying an AMD 5770 when I assembled my last desktop computer. This hateful GPU. If you care even one bit about either OpenGL or Linux, you are *ed with AMD.
The open source drivers for Linux are too slow for anything that matters, and the closed source drivers made by ATi are too buggy to be used at all; they couldn't even run Gnome 3 without so many glitches, for so many months without improvements to the drivers, that I felt true despair.
You want OpenGL and/or Linux, and you need performance? Get an Nvidia. There's nothing but Nvidia that works well with either.
If you don't need performance, then simply get something that has an integrated Intel chip.
I will never, ever think about buying anything from AMD again. Pure garbage. Their newer CPU line, the Bulldozer, is also garbage that runs slower than the older generation on many tasks.
[OT: Why is your nick green?]
I couldn't run any game under wine either without crashing, even older games that usually run ok like Baldur's Gate 2.
The open source drivers are much more stable, but on the other hand they are MUCH slower. They turn a good midrange card like the Radeon 5770 into what low-end crap would do in Windows. But you're better off with that midrange or a high-end card, since a low-end card is not going to be usable at all for anything that makes use of the GPU; with the open source drivers, running even free games like Xonotic on a low-end card is plain crazy.
On a game like Warsow, the difference between the open source driver and Catalyst can be as high as 34 fps vs 374 fps on a high-end card, and 17 fps vs 45 fps on a low-end card, which means that the game is not playable at all on the open source driver, while it's tolerable on Catalyst. (34 fps is not enough for a fluid gaming experience; you need at least 50 to 60. Gaming is not like the 24 fps of motion pictures, where each frame is blurred, giving the illusion of movement even at a low framerate; each frame rendered in a 3D game is sharp as hell.)
AMD's stuff is not worth the pain. Don't buy AMD unless you have a masochist streak. I wish I had known before I built my desktop; I was being influenced by all the free software maniacs who were praising AMD to heaven because of their policy of giving out the GPU specs. Free software be damned, I will take Nvidia's closed source driver over shit that is not working next time. My previous GPU was an Nvidia GeForce 7600 GT and it worked like clockwork.
... if in the next 6 months the situation doesn't get any better, I feel I am going to physically destroy the GPU with a hammer, send the pic to AMD, and buy an Nvidia before the need for a hardware upgrade even shows up. The fact that it has driver problems even under Windows doesn't help AMD's case.
TBH I also gave up about a month and a half ago; it could have changed recently, but I doubt it.
Way too slow for gaming (I tried HoN and NVN), and WebGL was really slow as well (software-rendering kind of slow); since this is a thread about games, I think this use case is required. I didn't try the OSS drivers in ~6 months, so it could have changed. But Gnome/desktop was really stable, so if you only need desktop apps it does work.
Multi monitor support should be fixed, latest Nvidia beta drivers have Xrandr. It's still a new feature so it can have a few rough edges, though.
In fairness to AMD, AIUI that is by design, with the intention to increase clock speeds and core counts to compensate. However, that line of thinking was the downfall of the Pentium 4.
Unfortunately they've dropped laptop support, so not really an option these days. The open source Intel/Radeon drivers in Mesa look to be the only game in town in the future.
edit: to expand, they've said many times that they're not going to support Optimus, and that covers all except the luggable laptops these days.
I'll never buy ATi again after my experience with them.
How AMD could stay in business with such subpar products is beyond me. I know they'll never regain my trust. That's just not possible. Too many problems on too many platforms.
a) radeon 3850
b) fusion e-350
c) a mobile x700
The only nvidia chip I have is some version of a mobile quadro and it has very bad thermals and will frequently segfault.
I'm glad for your sake that you haven't hit any driver bugs though.
Intel cards, famous for lying about which set of OpenGL extensions are really available in hardware.
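And an application can't easily tell: all it can do is ask for the advertised list, e.g. (GL 2.x style; GL 3+ core uses glGetStringi instead):

    #include <string.h>
    #include <GL/gl.h>

    /* Returns 1 if the driver *advertises* the extension; this says nothing
       about whether it is hardware-accelerated or quietly software-emulated. */
    static int has_extension(const char *name)
    {
        const char *exts = (const char *)glGetString(GL_EXTENSIONS);
        return exts && strstr(exts, name) != NULL;
    }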
All the GPUs made in Taiwan and similar countries delivered with cheap desktop PCs (< 500 euro).
I'd love to see real performance improvements when it comes to Linux, but right now I just see a company that (it seems) has something to gain by smearing Microsoft, giving out results without any methodology or explanation. They never even said how they avoided the typical pitfall of "FPS" benchmarking: making sure the GPU and the engine are actually drawing the same things (that means making sure all the same shaders are executed and similar extensions are used, so that image quality is identical).
Until we get some confirmation in the wild I'll remain highly sceptical towards claims on the blog.
Most people just want the game to run well enough that the difference doesn't warrant switching OSes for gaming. For me, I just want it to be close enough that I can't be bothered to reboot.
I've been getting some great performance through wine for some games for quite a while now. Under the same settings I've occasionally had slightly better or slightly worse performance depending on the game, patch-level and settings. For me the bar is whether it is playable and looks decent at all.
Windows makes you jump through many hoops (think registry hacks) in order to disable that function on almost any version of their OS. I don't know of any Linux distros (correct me if I'm wrong here) that do that by default.
Disabling it in Linux: depends on the desktop environment. KDE has it on by default, can be disabled in much the same way as on Windows. Same for the GNOME desktop in Ubuntu: enabled by default.
Disabling mouse acceleration if your mouse's firmware does it: hopeless on Linux and on Windows. If you're lucky, the mouse vendor has some Windows program that allows access to it.
Note that in all cases, disabling is easier on, or only possible on, Windows compared to Linux.
No it isn't. If you uncheck the pointer precision box, there's still a level of acceleration happening behind the scenes, as late as Windows 7, that still requires registry changes to disable.
http://www.x.org/wiki/Development/Documentation/PointerAccel... has a good rundown of all the options, perhaps the most important being the "Acceleration Profile", which can be set to -1 to disable acceleration.
As for firmware-related acceleration, I've not run into any - but I can certainly imagine this being something easier to manage under Windows, considering practically all "gaming" mice are aimed at Windows users.
It's a feature of some mice. If it can't be disabled, it's a misfeature for gaming. Curiously some gaming mice are sold with acceleration that can't be disabled: http://www.saunalahti.fi/~cse/temp/mice.html
And all those APIs are gone. The amount of code required in ES2 to put anything on the screen at all is staggering.
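For comparison, a rough sketch of a triangle both ways (in the ES2 version, shader compilation, error handling, and the program bind are omitted, and nothing appears on screen without them):

    /* Legacy immediate mode: */
    glBegin(GL_TRIANGLES);
    glVertex2f(-0.5f, -0.5f);
    glVertex2f( 0.5f, -0.5f);
    glVertex2f( 0.0f,  0.5f);
    glEnd();

    /* ES2/core style: buffer object + vertex attribute setup: */
    static const GLfloat verts[] = { -0.5f, -0.5f, 0.5f, -0.5f, 0.0f, 0.5f };
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof verts, verts, GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (const void *)0);
    glDrawArrays(GL_TRIANGLES, 0, 3);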
But that said: the per-vertex APIs really are broken from an optimization sense, and serious apps absolutely must do their work on buffer objects. (Display lists have historically been a way to cheat this, at the expense of huge complexity in the driver. This is one of the bad APIs exDM69 is talking about, I suspect.)
Honestly, I view the old GL stuff as a sort of toy language for graphics--and yes, it's annoying to write the boilerplate (oh so much boilerplate) to get a spinning quad onscreen nowadays.
At the same time, if people need a toy language to get into graphics, we really should just write a simple layer on top of modern OpenGL, instead of polluting the API and spec with old garbage.
Ask yourself this: if immediate mode never existed, would you still say that the ES2.0 method is "basically impossible to learn"? I assure you that it is very possible to learn.
Did you learn ES2 (i.e. create a buffer object, fill it, pass in attribute pointers, goof up the stride values, do it again, draw)? Or did your first triangle come out via begin/vertex/vertex/vertex/end?
Are you saying that immediate mode should be used in all cases, even for rendering a complex mesh where you will need to spend CPU cycles iterating over and uploading vertices to the GPU every frame?
I'm saying that the begin/end APIs should not have been dropped from the spec in ES2. There needs to be some kind of equivalent, because high-performance games and driver writers aren't the only users of the API.
If you need a 3D engine, use a 3D engine. If you're writing a 3D engine, you use OpenGL.
OpenGL should start from scratch and change its name and code naming conventions so no-one would be under the illusion that you can take OpenGL code from 1996 and make it work on OpenGL in 2012.
That would have kept JWZ happy and others who complain about the lack of fixed function and immediate mode in OpenGL too.
Btw, once you get past the basics in GL 1.x fixed function, everything becomes /ridiculously/ difficult. Look at how Quake 3 did its "shaders", for example.
The current state of the Mac side of the Steam store is way behind the Windows one, but it's still quite formidable.
I'm reasonably certain that MS even knows that releasing Office on Linux would sound their death knell. Although, I do feel it a bit unfair that it's available on Mac OS.
On Windows big players like Valve are going to be less willing to play sharecropper.
Windows RT does, but that's like comparing OSX to the iPad.
The point I was making is that Steam doesn't work on Microsoft ARM tablets powered by Windows RT, but that shouldn't be a big surprise considering no desktop software works on ARM computers without being recompiled for that platform. It's no different than OSX vs iOS.
Edit: haha, I wonder if this article is true about Microsoft abandoning "Metro" in favor of "Windows 8-style UI". Seems relevant.
Maybe it has something to do with the Microsoft App Store and MS being the gatekeeper that decides if an app runs or not there. Oh, and by the way, getting a 30% cut of the price of the software, all software, a la Apple.
No matter what kind of company you are, whether a software or hardware one, if you're doing "well" in the current/status quo Windows environment, then Windows 8 is a threat to you, because it will change a lot of things, and most likely for the worse for regular "PC-oriented" companies.
Not really. FPS readings >= monitor refresh rate give skewed results that may not reflect the actual performance. Despite this, it's still used widely in performance benchmarks.
FPS is the new MIPS.
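The underlying issue is that FPS is a reciprocal measure. A back-of-the-envelope sketch in C of why equal FPS deltas are not equal performance deltas:

    #include <stdio.h>

    int main(void)
    {
        /* 270 -> 315 FPS looks like a big win (+45 FPS)... */
        double d_high = 1000.0 / 270.0 - 1000.0 / 315.0; /* ~0.53 ms/frame saved */
        /* ...while 50 -> 60 FPS (+10 FPS) is a far larger real improvement. */
        double d_low  = 1000.0 / 50.0 - 1000.0 / 60.0;   /* ~3.33 ms/frame saved */
        printf("270->315 FPS: %.2f ms/frame saved\n", d_high);
        printf(" 50->60  FPS: %.2f ms/frame saved\n", d_low);
        return 0;
    }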
Dual boot on a Sony Vaio FZ-21S, with onboard ASIO chipset, headphones - Pioneer SE-A1000 (Equivalent to a Sennheisser HD - 595) with no equalizers.
I'd just like to point out that you'll get more variation in your sound by just turning the mini-jack connector than by modifying any driver, which does nothing more than copy a stream of digital audio into the buffer of the sound card to be sent to the DAC.
The only area where I've seen changes caused by drivers is on latency, a latency that matters only in a creative context.
That's a gross oversimplification of what the audio driver does. Mixing, leveling, EQ, spatiality and a lot of other details are handled by the driver as well. My last Linux laptop, running Ubuntu, had an annoying hiss when the levels were at 100. The fix was enabling SSE2, which got disabled every time the kernel was upgraded. From that example I know the driver is doing far more than streaming audio to the DAC.
The best way to identify any differences is to run the audio into a calibrated measurement device (at the very top end an Audio Precision box) and check the gain, frequency response, jitter, THD+N, etc.
Edit: Will calibrate and let you know mate. Thanks!
Other than that, it's just Valve making a Linux version of Steam and trying to promote it.
Which would make the entire comparison completely pointless. To the point of the blog post almost being a trollpost. Valve have a clearly defined agenda here, remember.
This is incorrect. OpenGL is almost feature equivalent to D3D.
Properly done, they should be able to get similar results. If they spend a lot of effort on it, they should be able to get bit accurate results.
For 60 FPS, that's an increase to roughly 61-62 FPS: the gap between 270 and 315 FPS is only ~0.5 ms per frame, and taking 0.5 ms off a 16.7 ms frame barely moves the number.
> We are using a 32-bit version of Linux temporarily and will run on 64-bit Linux later.
Both Windows and Linux have PAE support, but non-server editions of Windows still limit the physical address space to 4GB (although this might seem to make PAE support pointless, there are other advantages of PAE, like the NX/XD bit used for DEP).
You are also correct that the per-process limit of 4GB of virtual memory is not affected by PAE. In case the advantages of PAE once again seem pointless, remember that a system can execute multiple 32-bit programs each with its own independent address space, so you can have, in theory, eight programs each using its own 4GB on a 32GB system without running out of memory.
In practice, the platform imposes a limit of <4GB for physical memory, with the remainder used for legacy support and hardware DMA. Also, the operating system imposes a limit of <4GB for virtual memory, with the remainder used by the kernel.
Is it because of the video card drivers?
This is part of the reason why games remain 32-bit on Windows as well (that, and compatibility with 32-bit versions of the OS).
However, there are, on x86 at least, other reasons to go 64-bit, perhaps the most prominent of which being the availability of additional general-purpose registers (8 more, to be precise).
An ABI for Linux has been proposed called x32 that would run 64-bit code but use only 32 bits for pointers. This would re-impose the 4GB per-process memory limitation but still allow the use of other 64-bit-only features. AFAIK, x32 programs are not compatible with x64 libraries, and vice versa, though.
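A quick way to see the difference, assuming a GCC toolchain with x32 multilib support installed:

    /* abi_sizes.c -- build with gcc -m64, -mx32, or -m32 and compare. */
    #include <stdio.h>

    int main(void)
    {
        printf("sizeof(void *) = %zu\n", sizeof(void *)); /* 8 with -m64; 4 with -mx32/-m32 */
        printf("sizeof(long)   = %zu\n", sizeof(long));   /* 8 with -m64; 4 with -mx32/-m32 */
        return 0;
    }

Under -mx32 both come out as 4, even though the compiled code still uses the full 64-bit register set.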
I've stayed out of PC gaming for a long time because it seemed to have a fetish for spending lots of money on graphics cards.
EDIT: Am I being downvoted because it's stupid to ask about 32GB because obviously every Real Gamer already has that, or because it's stupid to ask about 32GB because obviously no one needs that?
Downvoting seems somewhat over the top, I'd say.
I was told that frame rates over what you can actually see tell you very little about the actual performance. So implementation A, 10% faster than B when running at 315 FPS, might very well be 50% slower than B at 60 FPS.
What is interesting is if the frame rate can keep a steady 30/60/120 FPS. Anything above that is just not indicative of actual performance.
Bear in mind that Valve is running this test on a much higher-end PC than most consumers are likely to have. People will want to be able to run these games on their $500 laptops, so the difference between 270 and 315 FPS might translate into a difference between 50 FPS and 60 FPS, which would be noticeable.
...or it might not. Which is the problem. You can't just interpolate like that.
Or can you? I have only hearsay to go on.
That is a headroom of a factor of 5.25. If you use that headroom to implement better graphics, your performance characteristics will likely change completely.
That's the point. A thousand vs. a million FPS when you draw a single unshaded triangle doesn't necessarily interpolate to predict the performance in a real scenario.
Also, I got bored the other day... I can't help but wonder if Valve would push a desktop of their own devising?
Anyone know of any interesting hires that may have gone under the radar?
Valve creates a Steam client, and gets the Source game engine running awesomely on "Linux". How do all the different desktops factor into this?
No, not really. There is still lots of room for improvement, especially if you want high resolution (Retina or a large display), 3D, a multi-monitor configuration, and/or high levels of anti-aliasing. And providing a stable 60 FPS at 1080p can be difficult if you have mid-range or couple-of-generations-old hardware.
Those gamers that demand 1080p@60Hz under load are a niche, not the standard for 'pc gamers'. And for those who demand it, shelling out a few extra dollars for hardware that exists (previously it didn't) is much easier than messing around with hardware voltages and crossing your fingers that your gpu is one of the lucky ones.
And frankly, the idea that gamers will flock en masse to linux for a few FPS is utterly absurd.
Oh, and Farmville ;)
Gamers use whatever platform their favorite games are available on.
I know many people are diehard about Linux, but it's a big commitment to switch and it isn't easy. Even so, many users might make the switch only to find that Linux seems to introduce a lot of headaches and perceived restrictions on capabilities that Windows doesn't. Sure, they could probably tweak their way past the difficulties, but after troubleshooting on a wiki or in forums for a couple of hours it would lose its flavor for the masses.
Also, the least hassle I have had with migrating people to linux has been with people who really have no idea about computers, possibly as they have no expectations of how things should be and don't fiddle with things as much.
And besides, there is no real switching. I run osx, windows7 and linux mint, mostly, (often on the same computer, although I have found triple boot on hackintoshes to be a bit temperamental). Using a new OS doesn't mean you no longer use the old one. It doesn't restrict you really at all, it just gives you more perspective and a wider range of expertise. I highly recommend it. Linux mint is not a bad place to start as it is compatible with ubuntu (and therefore debian, to a large extent), but currently has a bit more polish. - http://www.linuxmint.com/download.php