Valve Source Engine Running Faster on Linux than Windows (valvesoftware.com)
611 points by learc83 on Aug 2, 2012 | 238 comments

I have to say that this is nothing short of a miracle. The OpenGL API is really hairy to implement at all, let alone to make fast. There are 20 years of accumulated legacy and design mistakes that the driver vendors need to care about. D3D, on the other hand, has been redesigned on every iteration, and backwards compatibility has been broken in every major release.

On the other hand, porting from D3D code, you don't touch the hairy parts of OpenGL at all because D3D is functionally quite close to the "clean parts" of GL. But there's still some software overhead to overcome. They might also be using a proprietary D3D-on-OpenGL layer, like the ones being used for some Mac ports of Windows games (edit: seems like they are not).

I hope that Steam, and gamers that use Mac and Linux in general, will push for bigger changes in OpenGL, both specs and implementations.

disclaimer: I write GPU drivers for a living

> disclaimer: I write GPU drivers for a living

Is that a reason for us not to take you seriously? Feel free to disclose any connection you have with the subject but please don't disclaim responsibility for your comments. :)

Unless, of course, you want to…

(Sorry for the no-content post but I see this word used in every second thread these days and it's one of the things that might be driving me nuts.)

Disclosure, disclaimer. Both apply. I disclose that I work in the field but it can also be a disclaimer that I have extremely skewed views because of the frustrations that years of working too closely with OpenGL can bring. English is not my native language, sorry for any confusion.

Personally I think "disclaimer" is exactly right. If you work on the drivers (or some other low-level component that's part of the general plumbing), your concerns are not always aligned with those of people writing the actual software. Therefore, if you have a complaint about something, it may be that you should be the one who has to do the hard work rather than the application writers, because making your life easier might not maximize overall happiness.

So, "disclaimer" is spot on, I believe, because application writers might like to shed a tear, but not sympathize too hard...

All I think he's doing is alerting the reader that his perspective might be skewed because of his job. In this particular instance, I think that's useful because it balances out the extra authority his comment otherwise gets from the detail he provides (also presumably from the same job).

Whilst it is dangerous to guess at what people mean, I understood the "disclaimer" to mean "I do this for a living so I know what I am talking about".

So, if that is the correct interpretation, then it is not a disclaimer, but more a disclosure. Which is what noibl was trying to educate exDM69 about.

But it is really all idle speculation until exDM69 chimes in.

Dang, now you've made me look up the definition! Apparently, one should only disclaim when one wants to distance oneself from a previous claim.

Maybe it's time for OpenGL 5.0 to start from scratch, and support only the latest x86, ARM and MIPS chips. It might be the only way to catch up with DirectX, no? Right now I'm excited about OpenGL ES 3.0 for mobiles, and what it will bring to the mobile market.

> Maybe it's time for OpenGL 5.0 to start from scratch

Yes. Someone should do it. The major industry players seem to have the interest but lack the will to put in the resources it requires.

> Right now I'm excited about OpenGL ES 3.0 for mobiles

Prepare for disappointment.

If you prepare for disappointment, you can't really be disappointed, can you?

You can always be more disappointed.

Spoken like a true OpenGL programmer. :(

I work from the proposition that we are all crazy arrogant monkeys whose main talent is the ability to construct and understand stories.

This means I am constantly really impressed by the fact that we get anything done at all.

Unfortunately, I am also a crazy arrogant monkey, so am also constantly really deeply disappointed by the fact that we haven't all got jet-packs and holiday homes on the moon yet.

Well, a good version of OpenGL 5.0 would be very welcome (even without a complete rewrite), but history shows that the ARB (Architecture Review Board) likes to shoot itself in the foot [1]. Design by committee usually takes a lot of time, so a rewrite would probably take forever given the number of legacy OpenGL CAD applications.

Small nitpick: OpenGL would be catching up with Direct3D, not with DirectX (which includes parts like input or sound that AFAIK are out of scope for OpenGL).


edit: found HN discussion about this link: http://news.ycombinator.com/item?id=2711231

Thanks for that! I totally forgot about Fahrenheit and I was super excited about it back then and depressed when it got cancelled, ha!

They tried with OpenGL 3, but it took them years and the plans were scrapped. (That was when Microsoft brought DirectX 9 to the table, which was well designed and much more usable than OpenGL.) They restarted development a few years ago to clean up the mess.

The main reason why OpenGL did not get a restart was pressure from the CAD and 3D modelling industry.

The cynical view is that having two standards, one consumer and one pro, lets graphics companies ship the same chips, but charge 10x as much for the pro version with a couple driver tweaks. If the consumer games were written against the pro standard (requiring pro support in the consumer cards), how would they justify the pro line?

Why is that community (CAD/3d-modelling) even sitting at the table?

These modern GPUs are very generalized hardware now. Their drivers are "simulating" all kinds of features. It's a bit like OSS vs. ALSA. They should just make the GPU a standard compiler target, and offer only the bare minimum (video mode setting) at the driver level.

Let the game industry write/design their own API on top. Let the CAD community do the same for their use cases. I know, with Mesa, this is kind of the setup, except that Mesa isn't executed by the GPU: it runs on the CPU, and from there it executes code on the GPU.

Wouldn't it be much nicer if GPUs just behaved like CPUs? They are just optimized for different types of processing. That's the interesting end-game: could the CPU be optional?

GPUs still have a ton of performance-critical fixed function units, and actually recently gained a new category of fixed-function unit (tessellation). OpenCL cannot replace OpenGL or D3D.

And the CAD/DCC market gets a say because they pay 10-100x more per seat than consumers.

OpenGL ES 2.0 is close to the "clean parts" too. I think targeting that subset of the API is a great way to keep OpenGL code totally sane and manageable.

GLES2, GL3 and GL4 are the "clean parts" only, OpenGL 1.x features have been removed. But the API is still a mess and has all the design mistakes dating back to 1992.

Isn't "OpenGL - The good parts" actually called OpenGL 2.0 ES?

> Isn't "OpenGL - The good parts" actually called OpenGL 2.0 ES?

No, it is called the GL 4.2 core profile. GL 4+ core profile is a superset of GLES2. It does not include any of the stuff left behind from GL 1.x days.

GLES2, and Big-GL3+ have the legacy graphics features removed but the API is still horrible. More on this in my other comments in this thread.

OpenGL 2.0 ES is absolutely horrible - it's 3 generations behind on hardware capabilities even in mobile devices. For example, even though almost all mobile chips support multiple render targets (MRT), I haven't seen a way to do this in ES with any extension yet, even in WebGL. MRT is some pretty basic SM3 (Direct3D 9) level stuff and is required for a lot of cool rendering techniques.

That's what I was thinking - it's a pretty clean subset, but since it's designed for mobile it is probably missing too many of the more advanced features desktop games tend to use. Perhaps one day OpenGL ES could supersede the desktop OpenGL...?

I'm not sure how that would happen, since by definition mobile chips and OpenGL ES need to be "leaner" and maximize efficiency to preserve battery life, while on the PC you don't care as much about that, so the chips are more "bloated" and contain a lot more features. Although we may see a return to efficiency now that laptops are overtaking PCs, they still need to get rid of the legacy stuff first.

I think what you meant is that OpenGL ES may one day be more popular and more used than the "full OpenGL", because there are a lot more mobile devices + WebGL.

I don't think jwz (who is much smarter than me) has many good things to say about OpenGL ES


JWZ isn't saying that OpenGL ES isn't the good part of OpenGL, he's saying that nobody should ever have removed things from OpenGL in the first place, because he wants to port ancient code to modern systems.

Another way to take it is that ES removes too much from OpenGL.

In any case, I submit to the opinion of experts - my experiments with OpenGL have been simple and, when I attempted anything more complicated, they resulted in much more frustration than learning.

"because he wants to port xscreensaver code to iOS."

fixed that for you.

"jwz" and "Get off my lawn" are practically synonymous at this point. He's a great hacker, and a fantastic writer, but his priorities are entirely different from most developers.

For those who, like me, use desktop Linux full time, Valve's close collaboration with NVIDIA and AMD (i.e., ATI) to improve the performance of proprietary graphics drivers is a huge win.

Quoting from the blog post: "We’ve been working with NVIDIA, AMD, and Intel to improve graphic driver performance on Linux. They have all been great to work with and have been very committed to having engineers on-site working with our engineers, carefully analyzing the data we see. We have had very rapid turnaround on any bugs we find and it has been invaluable to have people who understand the game, the renderer, the driver, and the hardware working alongside us when attacking these performance issues."

Leaving aside the amazing fact that Valve is achieving higher FPS on Linux than on Windows, the great thing about efforts like this one is that we can all expect the Linux desktop to become even smoother, faster, more seamless. I love it!

Seems to me like a huger win would be if the open-source graphics drivers would get improved.

I'm not even talking ideologically here. The OSS drivers are included along with the kernel in most distros. They're what you get by default when you boot a new Linux install. They get upgraded alongside other core packages by the distro's package manager and are generally quite up-to-date compared to the "official" drivers in the package repository (if they're even there at all).

I use the OSS radeon driver at home and the OSS nouveau driver at work, and both are quite adequate for my purposes (which involve no real 3D). I think it would be a bigger improvement to the ecosystem to have these drivers get some love.

(Or, alternately, for the proprietary drivers to go open-source and be included with the kernel).

> Seems to me like a huger win would be if the open-source graphics drivers would get improved.

Their work with Intel is on the open source drivers.

Show me where I can buy an Intel graphics card. Please.

Near as I can tell, the only Intel display adapters are the ones integrated into laptop motherboards. You may be able to buy the occasional desktop motherboard with integrated Intel graphics, but good luck finding one that can support two DVI monitors or dual-link high-resolution displays or anything fancy like that.

However, the last I looked into this was a while back; perhaps the situation has improved?

They're not integrated into the motherboards, they're built directly into the CPU. This reduces power consumption and allows the motherboard to shrink, no separate component needed.

The HD4000 isn't the most capable chipset on the market but for most applications, light-duty use, it's fine. Even the basic versions of this chipset could support two 24" screens without much trouble. Just don't expect benchmark-shattering 3D performance.

Well, as of ~5 years ago, nobody except power users has a desktop computer. Laptops have become the "default" type of computer, and almost all of those have a graphics chip integrated directly into the CPU, not onto the motherboard like in the olden days.

Everyone I know that's in college or below has a laptop, that much is true. But all my family and friends that are ~30+ and not working in IT have desktops at home. I can assure you none of them are power users.

nathanb: Yes, that would be ideal. The way I think about it is this: even though this effort falls far short of the ideal with NVIDIA and AMD, it's still a huge win for the platform.

My impression is that there are issues with 3rd party licensing of various sorts in the proprietary drivers that would prevent open sourcing them even if Nvidia/AMD wanted to.

Of course that probably doesn't prevent them from giving the OSS drivers some love, but that is another story.

The genesis of this was back in 2006 when Microsoft started requiring graphics board/chip makers to keep some aspects of their design secret, to support DRM in Vista. That includes hardware details and code, i.e. open source drivers. See http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.html

I'm actually quite glad that adequate drivers are available, even if they're not open source. That outcome wasn't guaranteed.

It goes further back than that. The software stack for all modern AMD GPUs is based on the original unified shader compiler for the first AMD unified shader GPU, Xenos (XBOX360) written largely by Microsoft. AMD just has a license to the code, MS still owns it.

Any sources for the software stack part? The only information I managed to find was that AMD was contracted to create the Xenos core, but there was no mention of who made the software (I would assume AMD) and especially no information about licensing.

nVidia's argument for not opening up the Linux driver is that it shares a lot of code with the Windows driver. I'm not saying that this argument makes sense, but it's the one they use.

I thought their argument was that some non-trivial amount of code in their driver is licensed somehow and they're thus not free to open source it. It sharing code with the Windows driver is the explanation for how that code got into the Linux driver to begin with.

I don't understand why that prevents them from releasing the source? There's no reason the Windows driver need be closed-source.

That depends, how much of their competitive advantage over AMD is in software and how much of it is hardware?

I might be waving hands here, but I got the impression that the actual components that do raw processing in a GPU are not that complex. The complex part (and the one which produces the performance) is the pipeline feeding data into those units. GPUs are also programmable chips, so even publishing a driver that shows how the graphics stack on the GPU side is set up could potentially reveal the whole architecture of the chip.

Or at least documenting the hardware.

> For those who, like me, use desktop Linux full time, [...] is a huge win.

Well, I don't know... For the sake of attracting more users, Linux usage tends to become more and more Windows-like. The best illustration is the dropping of network transparency in Wayland because "most people don't care".

I suspect that there are going to be more and more concessions like this.

bnegreve: X.org-like network transparency falls outside the scope of Wayland, so it was never really dropped as a feature.

Moreover, as I understand it, you will be able to run X.org as a Wayland client, so you're not losing anything compared to today's setup. (IOW, you will be able to run remote X11 apps transparently over the network on a Wayland server.) Not only that, but Wayland clients can use any other network protocol -- VNC, RDP, etc.

IMO Wayland too is a big win for the Linux desktop.

> you will be able to run X.org as a Wayland client

Not really; applications developed on X.org control the display through a drawing API. This is why network transparency (among other things) is so simple. If people start developing applications for Wayland, it won't be that simple/elegant.

We are dropping a level of indirection for performance purposes. This is quite unique in history.

Personally, I prefer VNC to X's network transparency, because all my applications don't get closed if my internet connection drops.

How do better graphics drivers have anything to do with making concessions to become more Windows-like? Or is it using Linux as a full-time desktop operating system you're objecting to?

Better graphics are great, but it's hard to believe that nVidia or Valve will release open source software that can be installed/tested/bug-reported through a package manager.

This is a big loss for stability and usability. Fetching untested software and installing it manually is very Windows-like.

This is huge. 2013 is going to be the year of Linux on the desktop. :P

Here's the submission that linked to the actual blog post, not to the whole damn blog:


And that post didn't make it to the front page, because the title wasn't very descriptive.

I wanted to read some real hacker news discussion, which we're now getting with a better title.

Yes, this is a "problem" on HN sometimes. The original posts don't get upvoted, but some summary/repost by another better-known site does. Yesterday there was an article that was submitted from multiple sources, but only the Engadget one, of all the sites, made it to the front page.

I'm unsure how that's necessarily a problem with HN so much as it's a problem for headline writers. There are a lot of articles hitting /new every hour; it's the original headline writer's job to grab our attention and make us read them.

To be fair, mtgx said it was a problem "on" HN, not "with"...

It was on the front page, I saw it when it was at 40 points or so.

And a worse link.

Steam working with hardware vendors on Linux drivers means everyone in the Linux community benefits. This is fantastic news.

Yeah. And one of the anti-Linux arguments from Windows fans is getting killed - games do run on Linux. If this experiment is a success for Valve, we can expect other game producers to follow in their footsteps. And taking into consideration that not everybody is going to like Windows 8, this could be the beginning of a revolution!

"Windows fans" is a bit strong of a term for many. Does the fact that one needs Windows to run some of the software they like make them a "Windows fan"? I use the three major desktop OSes on a regular basis. My laptop is an MBA running OS X, since that's what works best for me in a mobile computer. My desktop used to dual boot Linux for my main, daily computing needs, and Windows for games. I had to stop using Linux when I switched to an ATI card that couldn't even tolerably run Gnome 3.

Free software proponents tend to be less level-headed on the issue. I think there aren't a lot of true "Windows fans", but on the other hand, there are a lot of "Linux fans" who live for the politics of free software. Lots of people use both proprietary and free software. I do. We are the ones who don't care for your politics, unlike lots of hardcore Linux proponents who are actively hostile to closed source software. On purely technical terms, I like Linux better than Windows because of its ecosystem (terminal software, scripting, the Unix philosophy), but certainly not because I'm enamored with the politics of the FSF. OTOH, I use Windows not because I love it, but because I need it. That's being practical.

People who put other groups into boxes, like "anti-Linux" or "Windows fans", aren't practical; they're turning software into a religion. You're even speaking of a "revolution"... for god's sake.

You're reading way too far into that comment. A "windows fan" is the kind of person that says windows is 'better' because 'it has games'. It's not an insinuation of being a crazy fanboy. It's not something that applies to anyone that uses windows for any reason.

You're the only one here treating it as a serious and solemn issue.

Yes, that was exactly my point. Thanks for clarifying. And by "revolution" I meant that this may be the beginning of serious user migration from Windows to Linux. Of course, there is Ubuntu, Linux for human beings; some people do use it, but it's still "some" - a number small enough that you do not take it into consideration when planning/developing a game.

I'm not an anti-linux Windows fan. I like both of them, and use both of them daily.

But I feel very rejected by the GNU mindset that 'every piece of proprietary software on my platform is bad'.

Thankfully that's a GNU mindset, not a Linux one. Canonical includes proprietary drivers with Ubuntu because it makes the product better, and Linus himself has said that he supports OSS because it's a superior development model that leads to better products, not because of some half-baked moral claims.

I do agree, I think the ideal software platform will be an OSS OS, and a healthy ecosystem of both proprietary and OSS on top.

> We have been doing some fairly close analysis and it comes down to a few additional microseconds overhead per batch in Direct3D which does not affect OpenGL on Windows. Now that we know the hardware is capable of more performance, we will go back and figure out how to mitigate this effect under Direct3D.

Why wouldn't they switch back to OpenGL on Windows then?

Besides the absolute numbers, I wonder how the hardware behaves when running resource-intensive applications like games. From personal experience, every laptop and computer I have owned has had its fans spinning much louder when running ut2004 (and other games) on Linux than on Windows - whether because it's getting hotter for less computational power, or because of more aggressive default fan settings in distros, I don't know.

> Why wouldn't they switch back to OpenGL on Windows then?

Because no one likes to program with OpenGL. Because of its history, it is one of the hairiest APIs out there. Try to propose using OpenGL to a game developer and you'll be laughed at.

Many GPU vendors have done a bad job with their OpenGL Windows drivers, and they have been really buggy, but the situation is improving.

I write OpenGL because I don't use Windows. I still hate it.

Simply not true. If you're new to real-time 3d I'd ignore this advice. If you're experienced, then you probably have enough info and project-specific knowledge to make up your own mind.

+1. I am having a flippin' blast working through this: http://www.arcsynthesis.org/gltut/

Do you write old-and-busted OpenGL or new-hotness OpenGL? Seems fairly clean to me when you leave out all that legacy crack the CAD vendors can't seem to give up.

> Do you write old-and-busted OpenGL or new-hotness OpenGL? Seems fairly clean to me when you leave out all that legacy crack the CAD vendors can't seem to give up.

I usually use the latest GL only. Today that would be GL 4.2 core profile. Or GLES2.

Sure, the old fixed function pipeline and immediate mode are gone. But the API still sucks: bind-to-modify semantics, using integers for handles, global state everywhere, etc. Those are the big problems, for both users and implementors.

What's wrong with bind-to-modify and using integers for handles? What alternatives are there? With my limited knowledge of OpenGL, I know you get a number, describing a resource in video card memory. I bet for an OpenGL implementation, binding a resource id means finding a pointer to the resource. Sounds reasonable to me. Would you rather get an invalid pointer that you should refrain from dereferencing (because you'll be reading from RAM, not GPU RAM)? That would've been a disaster.

And I don't really know what 'global state everywhere' means. Do you mean that you keep OpenGL-related info in your program's global state? In that case, you should be using some AppState object to keep the state (and pass a pointer around). Otherwise, what global state is there "everywhere"?

> What's wrong with bind-to-modify and using integers for handles? What alternatives are there?

  glBindTexture(GL_TEXTURE_2D, my_texture_handle);
  glTexImage2D(GL_TEXTURE_2D, ....);
Why not:

  glTexImage(my_texture_handle, ....);
In addition to being used for accessing and modifying, the bindings are global values that affect rendering. In the example above, it affects which texture is used on the texture unit that has been selected by glActiveTexture. What a mess.

> With my limited knowledge of OpenGL, I know you get a number, describing a resource in video card memory. I bet for an OpenGL implementation, binding a resource id means finding a pointer to the resource. Sounds reasonable to me.

Wrong. There's a lot of software in between, before we start talking about GPU resources, so it refers to a structure on the CPU side. The integer-to-pointer lookup in between is another cache miss that we don't need (it has measurable L1 cache effects). This has, to some degree, been worked around by adding state objects like "Vertex Array Objects" (similar to D3D input layout).

> Would you rather get an invalid pointer that you should refrain from dereferencing (because you'll be reading from RAM, not GPU RAM)? That would've been a disaster.

Opaque pointers are used quite a lot in that way (e.g. D3D, via object pointers); it's a common idiom in C that adds a little type safety and documentation. In this case the pointers would be internal structures in the GL driver's user space, but you should not dereference them anyway. Unix uses integers as handles because the stuff they refer to is inside kernel memory, and you could poison the kernel by passing invalid pointers.

> Do you mean that you keep OpenGL-related info in your program's global state? In that case, you should be using some AppState object to keep the state (and pass a pointer around).

I don't. OpenGL does. In OpenGL, you have a global variable (actually: thread local) called the "context" which contains all the state of the rendering pipeline. In essence, it's a global variable that has side-effects when you access it.

> Why not:
> glTexImage(my_texture_handle, ....);


Macros here would make the problem worse. The problem is not notation, it's the call to glBindTexture.

It 1) has inescapable, measurable performance overhead and 2) affects state that changes the rendering output.

So if you macro it, you're forcing yourself to call glBindTexture even in cases where that wasn't necessary (say, if you had just called glBindTexture and knew that the right texture was bound anyway). And, in addition, you have to remember to store the previous state of the bound texture in case someone down the execution path wants to use that value. Macroing things which change global state is, in general, a terrible idea.

If you don't macro it, your code has to remember whether the right texture has been bound, and be designed around that. It can be done for sufficiently simple cases, but the general solution also has performance overhead (checking a local variable before calling glBindTexture will get you into branch misprediction problems).

You can imagine complicated solutions like state batching: writing a better, retained-mode domain-specific language atop OpenGL which will analyze the sequences of your calls before making it, and then remove all the unnecessary glBindTexture, etc. calls. But it's a major pain in the ass that simply shouldn't exist.

You'll still have the driver doing the mapping from id to internal object under the hood. In some cases it can be north of 25% of CPU cycles spent on this lookup.

You can use the EXT_direct_state_access extension instead of bind-to-modify. Instead of:

  glBindTexture(GL_TEXTURE_2D, texId);
  glTexImage2D(GL_TEXTURE_2D, ...);

you can write:

  glTextureImage2DEXT(texId, GL_TEXTURE_2D, ...);

EXT_direct_state_access is not very well maintained. And bind-to-modify is just one problem, there are others. Everything is mutable in OpenGL (a stark contrast to D3D), and that makes things complicated.

I thought stuff like SDL was used precisely because it abstracted away all the crappy parts of OpenGL and gave you an easier API to work with?

> I thought stuff like SDL was used precisely because it abstracted away all the crappy parts of OpenGL and gave you an easier API to work with?

No. SDL only deals with the platform specific parts like windowing and OpenGL initialization. It does not do any OpenGL for you.

GLFW is another popular tool for the same purpose.

If it meaningfully abstracted away the crap as it was originally billed to do, yes. It doesn't really do that, IMO, and mostly just makes you deal with even more mental state. It's very common to see people use SDL more or less only for creating a window and pulling input (which it does do pretty well).

SFML is better, but the developers are pretty uninterested (SFML 1.x was declared "legacy" while there was a bug that prevented it from running on modern AMD graphics cards) and I wouldn't trust it as far as I could throw it.

Can you explain why you wouldn't trust SFML? I've been using it for a few months now and haven't had any problems.

I don't trust people who leap to a new, breaking major version while ~40% of people can't run the in-theory-supported old one. Their prerogative, but I won't use their library because of it.

id Software writes all of its games using OpenGL, and they always create state-of-the-art engines.

Yes, but Carmack also said that he now prefers D3D [1] and that the difference between OpenGL and D3D does not really matter: "It’s interesting how little of the technology cares what API you’re using and what generation of the technology you’re on. You’ve got a small handful of files that care about what API they’re on, and millions of lines of code that are agnostic to the platform that they’re on." [2]


[2] http://www.maximumpc.com/article/features/e3_2008_the_john_c...

It isn't because OpenGL is better. It's simply inertia. Don't take my word for it - take Carmack's.


I do all my graphics work in OpenGL, but I'm aware of its shortcomings.

Try to propose using OpenGL to a game developer and you'll be laughed at.

OpenGL dominates the massive mobile industry (those billion or so Android and iOS handsets run and live on OGL ES 2.0), and OpenGL was the premier desktop gaming API until relatively recently.

Then it fell behind not because it was a defective API, but rather because the OpenGL working group was far too slow to adapt to hardware improvements -- mired in infighting and bureaucratic BS -- while DirectX raced ahead with things like geometry shaders.

Of course mobile has completely changed the equation, and suddenly people are writing OpenGL engines that it then makes sense to reuse against all the other targets (Windows, Linux, etc.).

> Why wouldn't they switch back to OpenGL on windows then ?

Because for many cards, the OpenGL drivers are awful. If you are only implementing one of DirectX and OpenGL on Windows, there really is no contest. Of course, you could do both, but then you have another option to give users, and two very different codepaths to test.

The abundance of awful OpenGL drivers is one of the reasons given for the ANGLE project (http://code.google.com/p/angleproject/). I've often wondered what those cards are though? NVIDIA drivers at least always seemed to have solid OpenGL support.

AMD (ATI) GPU drivers are the worst when it comes to OpenGL. id Software is one of the few big companies that makes OpenGL games, and their games always run much slower on ATI cards (Doom 3 during the first few years after its release) or run with nearly game-breaking glitches (Rage, with textures that pop up on screen 3 seconds too late).

John Carmack was seriously angry about ATI's drivers when the game "Rage" was released. http://kotaku.com/5847761/why-was-the-pc-launch-of-rage-such...

"The driver issues at launch have been a real cluster !@#$, [...] When launch day came around and the wrong driver got released, half of our PC customers got a product that basically didn't work. The fact that the working driver has incompatibilities with other titles doesn't help either. Issues with older / lower end /exotic setups are to be expected on a PC release, but we were not happy with the experience on what should be prime platforms."

I unfortunately made the mistake of buying an AMD 5770 when I assembled my last desktop computer. This hateful GPU. If you care even one bit about either OpenGL or Linux, you are *ed with AMD. The open source drivers for Linux are too slow for anything that matters, and the closed source drivers made by ATi are too buggy to be used at all; they couldn't even run Gnome 3 without so many glitches, for so many months without improvements to the drivers, that I felt true despair.

You want OpenGL and/or Linux, and you need performance? Get an Nvidia. Nothing but Nvidia works well with either. If you don't need performance, then simply get something with an integrated Intel chip.

I will never, ever think about buying anything from AMD again. Pure garbage. Their newer CPU line, the Bulldozer, is also garbage that runs slower than the older generation on many tasks.

I've talked to some people who've worked on AMD (ATI) driver code, and at the time it was a mountain of unmaintainable co-op student code. The co-ops were allowed to write their crappy code, then when they left there was that much more technical debt.

On the other hand, Nvidia still has no proper support for multiple monitors under Linux. Yes, you can have two, and yes, you can rotate them. But only both at the same time, not one independently. Luckily, the open source driver seems to work quite nicely on my system.

[OT: Why is your nick green?]

I know Nvidia's support is not perfect. But frankly, which is better: something that's not perfect but achieves "good enough" support, or something like ATi with so many bugs that you couldn't even use Gnome 3, a desktop environment? http://www.phoronix.com/scan.php?page=news_item&px=MTAyN... http://ati.cchtml.com/show_bug.cgi?id=99

I couldn't run any game under Wine either without crashing, even older games that usually run OK, like Baldur's Gate 2. The open source drivers are much more stable, but on the other hand they are MUCH slower: they turn a good midrange card like the Radeon 5770 into what low-end crap would do in Windows. You're still better off with a midrange or high-end card, though, since a low-end card is not going to be usable at all for anything that makes use of the GPU with the open source drivers; running even free games like Xonotic on a low-end card is plain crazy.


On a game like Warsow, the difference between the open source driver and Catalyst can be as high as 34 fps vs 374 fps on a high-end card, and 17 fps vs 45 fps on a low-end card, which means the game is not playable at all on the open source driver, while it's tolerable on Catalyst. (34 fps is not enough for a fluid gaming experience; you need at least 50 to 60. Gaming is not like the 24 fps of motion pictures, where each frame is motion-blurred, giving the illusion of movement even at a low framerate; each frame rendered in a 3D game is sharp as hell.)

AMD's stuff is not worth the pain. Don't buy AMD unless you have a masochist streak. I wish I had known before I built my desktop; I was influenced by all the free software maniacs who were praising AMD to heaven because of their policy of giving out the GPU specs. Free software be damned, I will take Nvidia's closed source driver over shit that doesn't work next time. My previous GPU was an Nvidia GeForce 7600 GT and it worked like clockwork.

Just wanted to say that I have the same graphics card and the exact same problems with it (Gnome 3 even with all the patches and everything would still "restart" once per hour at least), so seriously if you are interested in running Linux avoid AMD.

And here I hoped that since the last time I tried, it would finally work... Many months have passed since I last gave Linux a go on this desktop. The fact that you're saying you're still having this problem now is not making me feel any happier.

... if in the next 6 months the situation doesn't get any better I feel I am going to physically destroy the GPU with a hammer, send the pic to AMD and buy a nvidia before the need for a hardware upgrade even shows up. The fact that it has drivers problems even under windows doesn't help making AMD's case.

>The fact that you're saying that you're still having this problem now is not making me feel any happier.

TBH I also gave up about a month and a half ago; it could have changed recently, but I doubt it.

No, avoid proprietary drivers. The open source ATI drivers are working beautifully for me. They're fast enough for everything I do, and they've been rock solid for me.

>They're fast enough for everything I do

Way too slow for gaming (I tried HoN and NVN), and WebGL was really slow too (software-rendering kind of slow). Since this is a thread about games, I think that use case is required. I didn't try the OSS drivers in ~6 months, so it could have changed. But Gnome/desktop was really stable, so if you only need desktop apps it does work.

> On the other hand, Nvidia still has no proper support for multiple monitors under Linux.

Multi-monitor support should be fixed; the latest Nvidia beta drivers have XRandR support. It's still a new feature so it can have a few rough edges, though.

Woah, you are right, thank you! So far, the beta driver seems to run somewhat smoother than the nouveau-version. 3D is certainly faster.

Nicks are green for users who have made less than a certain amount of comments. I think it's either 3 or 5, but I could be wrong.

Thanks, I always wondered about this. The threshold seems to be a bit higher though: Nicole060 has made 8 comments as of now.

> I will never, ever think about buying anything from AMD again. Pure garbage. Their newer CPU line, the Bulldozer, is also garbage that runs slower than the older generation on many tasks.

In fairness to AMD, AIUI that is by design, with the intention to increase clock speeds and core counts to compensate. However, that line of thinking was the downfall of the Pentium 4.

> You want OpenGL or/and Linux, and you need performance ? Get a Nvidia.

Unfortunately they've dropped laptop support, so they're not really an option these days. The open source Intel/Radeon drivers in Mesa look to be the only game in town in the future.

edit: to expand, they've said many times that they're not going to support Optimus and that covers all except the luggable laptops these days.

You can simply disable the intel gpu in most of these.

I must have bad luck so far then. Can you remember examples?

As someone who bought AMD/ATi to 'support' the idea of a more open Linux, I regret the decision daily. Very little works on Linux, and performance on Windows (I dual boot for gaming) is not very good.

I'll never buy ATi again after my experience with them.

Yeah, even when you run Windows, AMD is still not worth the pain. OpenGL games like Rage are bugged on release for AMD owners, and the Catalyst drivers often introduce terrible bugs. I don't upgrade my drivers unless the older ones have a known bug with a newer game. http://www.youtube.com/watch?v=LsD3y3vnsM8 http://www.youtube.com/watch?v=2pSHOJKcFU8 (The YouTube comment is wrong; it didn't happen only with Skyrim. I had the antialiasing bug with lots of games after I upgraded to that version of Catalyst at the time.) This is what happens (and happened to me) when you trust AMD with their drivers. That one of their mid or high-end cards can work like shit with one of the best sellers on the PC platform says it all about how much AMD cares for their customers: they're giving us the finger.

How AMD stays in business with such subpar products is beyond me. I know they'll never regain my trust. That's just not possible. Too many problems on too many platforms.

Because of the Windows world, which generally doesn't care about OpenGL, of course.

It's funny, but I have the opposite experience: the following AMD devices have been rock solid for me on both Windows and Debian.

a) radeon 3850

b) fusion e-350

c) a mobile x700

The only nvidia chip I have is some version of a mobile quadro and it has very bad thermals and will frequently segfault.

That's not really the opposite experience. Bad cooling on a piece of mobile hardware isn't nvidia's fault.

I'm glad for your sake that you haven't hit any driver bugs though.

> I've often wondered what those cards are though?

Intel cards, famous for lying about which set of OpenGL extensions are really available in hardware.

All the GPUs made in Taiwan and similar countries delivered with cheap desktop PCs (< 500 euro).

Last time I checked, Windows OpenGL support was half-hearted since the Vista days. From Microsoft's PoV, making OpenGL a viable target on Windows would make porting games to other platforms easier. My bet is that they'll try to avoid it if they can.

Games like L4D are designed to run on the Xbox, which uses DirectX.

What do you mean "designed to run"? Most big studios have a layer between their application code and the OpenGL or D3D interface to make it portable. They're no more designed for DirectX than they are for OpenGL.

I might be the only sceptic here, but the fact that Valve seems to have a conflict of interest when it comes to Microsoft lately (see Gabe's comments) makes me doubt the authenticity of these results.

I'd love to see real performance improvements when it comes to Linux, but right now I just see a company that (it seems) has something to gain by smearing Microsoft giving out results without any methodology or explanation. They never even said how they avoided the typical pitfall of "FPS" benchmarking - making sure the GPU and the engine is actually drawing the same things (that means making sure all the same shaders are executed, similar extensions are used so that image quality is identical).

Until we get some confirmation in the wild I'll remain highly sceptical towards claims on the blog.

Bear in mind that Left 4 Dead 2 is still using DirectX 9! The exact performance issues they are having problems with are fixed in later versions, as well as them adding speed improvements in many other areas too. If they spent as much time optimizing the DirectX renderer as they have done with the OpenGL one (which, bear in mind, is hardly work from scratch either as it's in use on the PS3 and OS X versions of their games already) then the DirectX renderer would probably be out in front.

You're not the only one. Valve are not the first people to port Windows games to Linux, and while some people claim they're faster, the general benchmarked consensus I've seen over the years is that both platforms are, within 10% or so, basically identical.

Their results are basically within 15-20%. The major achievement is just having acceptable performance. After that there tends to be so many other factors that you can tweak that only two metrics make sense: FPS and image quality without any non-automatic tweaks, and then the maximum that can be obtained by extensive optimization.

For most people, they just want the game to run such that the difference is not big enough to warrant switching OSes for gaming. For me, I just want it to be close enough that I can't be bothered to reboot.

I've been getting some great performance through wine for some games for quite a while now. Under the same settings I've occasionally had slightly better or slightly worse performance depending on the game, patch-level and settings. For me the bar is whether it is playable and looks decent at all.

They said that some of their optimizations applied to both Linux and Windows, bringing the final scores to 315 and 303.4 fps respectively. That's well under 10%. Seems believable to me.

I played the Windows versions of many games before then on Linux and the Linux versions always were faster to me.

I've noticed that Quake Live runs noticeably faster on Linux than on Windows. Strangely, though, this doesn't seem to be related to fps, but rather appears to stem from mouse latency (in other words, the game just appears more responsive on Linux). Has anyone made similar observations?

Could be the mouse acceleration.. great for general use, rubbish for gaming.

Windows makes you jump through many hoops (think registry hacks) in order to disable that function on almost any version of their OS. I don't know of any Linux distros (correct me if I'm wrong here) that do that by default.

Disabling mouse acceleration in Windows is a single checkbox exactly where you'd expect it: in the mouse control panel.

Disabling it in Linux: depends on the desktop environment. KDE has it on by default, can be disabled in much the same way as on Windows. Same for the GNOME desktop in Ubuntu: enabled by default.

Disabling mouse acceleration if your mouse's firmware does it: hopeless in Linux and in Windows. If you're lucky, the mouse vendor has some Windows program that allows access to it.

Note that in all cases, disabling is either easier on Windows than on Linux, or only possible on Windows.

>Disabling mouse acceleration in Windows is a single checkbox exactly where you'd expect it: in the mouse control panel.

No it isn't. If you uncheck the pointer precision box, there's still a level of acceleration happening behind the scenes, as late as Windows 7, that still requires registry changes to disable.


The post actually explains this is only needed for some old Windows games (predating Windows XP!) that mistakenly re-enable mouse acceleration themselves.

Quake, mentioned by the OP, is one of those old games where this fix is needed. You'd know this if you played Counter-Strike :)

In my experience Linux mouse acceleration is actually more customisable than in Windows; X11 has several related options that can be added to your xorg.conf or changed without restarting X with xinput.

http://www.x.org/wiki/Development/Documentation/PointerAccel... has a good rundown of all the options, perhaps the most important being the "Acceleration Profile", which can be set to -1 to disable acceleration.
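For example (the device name is a placeholder; substitute the one reported by `xinput list` — the option names are the ones from the X.Org pointer-acceleration page above):

```
# One-off, without restarting X: a flat (disabled) acceleration profile
xinput set-prop "Example USB Mouse" "Device Accel Profile" -1

# Or persistently, in xorg.conf:
Section "InputClass"
    Identifier "Flat pointer accel"
    MatchIsPointer "yes"
    Option "AccelerationProfile" "-1"
EndSection
```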

As for firmware related acceleration, I've not ran into any - but I can certainly imagine this being something easier to manage under Windows, considering practically all "gaming" mice are aimed at Windows users.

> As for firmware related acceleration, I've not ran into any - but I can certainly imagine this being something easier to manage under Windows, considering practically all "gaming" mice are aimed at Windows users.

It's a feature of some mice. If it can't be disabled, it's a misfeature for gaming. Curiously some gaming mice are sold with acceleration that can't be disabled: http://www.saunalahti.fi/~cse/temp/mice.html

As a counterpoint to the "OpenGL needs to start from scratch" discussion, take a look at JWZ's blog post about porting XScreenSaver to iOS: http://www.jwz.org/blog/2012/06/i-have-ported-xscreensaver-t...

Indeed. The death of the immediate draw APIs is something that only driver writers and serious optimization wonks think is a good idea. For everyone else, it simply makes ES2 an API that is basically impossible to learn. Without exception, everyone I know who does any 3D work at all learned first on 1.1 doing flat shaded fixed function stuff.

And all those APIs are gone. The amount of code required in ES2 to put anything on the screen at all is staggering.

But that said: the per-vertex APIs really are broken from an optimization sense, and serious apps absolutely must do their work on buffer objects. (Display lists have historically been a way to cheat this, at the expense of huge complexity in the driver. This is one of the bad APIs exDM69 is talking about, I suspect.)
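To make the contrast concrete, here's roughly what the two styles look like side by side (a sketch only, not a complete program: both fragments assume a current desktop GL context, and the second assumes a compiled shader program with attribute 0 bound to position and attribute 1 to color):

```c
/* Old begin/end ("streamed vertex") style: a couple of calls per vertex,
   re-submitted to the driver every single frame. */
glBegin(GL_TRIANGLES);
glColor3f(1.0f, 0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
glColor3f(0.0f, 1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 0.0f,  1.0f, 0.0f);
glEnd();

/* Buffer-object style (the only option in ES2): upload once,
   then a single draw call per frame. */
static const float verts[] = {
    /* x,     y,    z,    r,    g,    b */
    -1.0f, -1.0f, 0.0f, 1.0f, 0.0f, 0.0f,
     1.0f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f,
     0.0f,  1.0f, 0.0f, 0.0f, 0.0f, 1.0f,
};
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
/* Interleaved layout: stride is 6 floats, color starts 3 floats in. */
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void *)0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float),
                      (void *)(3 * sizeof(float)));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glDrawArrays(GL_TRIANGLES, 0, 3);
```

The begin/end fragment is something a beginner can type and understand immediately; the buffer-object version is where the goofed-up stride and offset bugs live, but it's also the form the driver can actually optimize.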

Though they are very handy, they tend to promote a misunderstanding of the pipeline that can hinder later OpenGL work. Learning the modern way right from the start saves a lot of annoyances, especially as the streaming interfaces have some different weirdness when compared with the newer stuff.

Honestly, I view the old GL stuff as a sort of toy language for graphics--and yes, it's annoying to write the boilerplate (oh so much boilerplate) to get a spinning quad onscreen nowadays.

At the same time, if people need a toy language to get into graphics, we really should just write a simple layer on top of modern OpenGL, instead of polluting the API and spec with old garbage.

There is really no use for immediate mode anymore; I'm of the opinion that if you lament the loss of immediate mode, you simply don't want to learn the newer, more efficient ways of doing the same thing.

Ask yourself this, if immediate mode never existed, would you still say that the ES2.0 method is still "basically impossible to learn"? I assure you that it is very possible to learn.

Terminology quibble: you don't mean "immediate mode" in the sense it's been used in the community. OpenGL is still an immediate mode API under all circumstances. There is no scene graph. I probably should have said "streamed vertex" or "begin/end" to make things clearer.

Did you learn ES2 (i.e. create a buffer object, fill it, pass in attribute pointers, goof up the stride values, do it again, draw)? Or did your first triangle come out via begin/vertex/vertex/vertex/end?

My first triangle came out when there was no programmable pipeline, so yep, via immediate mode (and by that I mean begin/end). Then, when OpenGL 2.0 came out, I learned how to use buffers to do it instead.

Are you saying that immediate mode should be used in all cases, even for rendering a complex mesh, where you would need to spend CPU cycles iterating over and uploading vertices to the GPU every frame?

Goodness no. I think you missed the last paragraph I wrote in the note you responded to.

I'm saying that the begin/end APIs should not have been dropped from the spec in ES2. There needs to be some kind of equivalent, because high-performance games and driver writers aren't the only users of the API.

It seems like the API could probably be replicated in glu or glut, so people who just want the fixed pipeline could get it easily.

Sure, it could be. But the problem is it wasn't, so ES2 came out and effectively abandoned huge chunks of existing code that relied on the fixed function pipeline. And they had no answer for what people were supposed to replace it with. Thus the aggrieved flaming from people like jwz who just want to port some software.

Immediate mode is very useful for drawing a single triangle that fully covers the viewport, something that is quite common for any sort of image processing or post processing. In these cases your performance will entirely be limited by the complexity of your fragment shader or by raw fill rate and not by the method you used to submit the vertices to the GPU.

Agreed, and I would also add that the overhead in terms of development time to set up the boilerplate to render this triangle via immediate mode and the newer methods would be comparable.

OpenGL's role in the software ecosystem has changed since GL 1.x was relevant. Back then it was used as a 3D rendering API (with lights and cameras and such). Today it is used as the standard interface to programmable graphics hardware. GL2+ knows almost nothing about "3d" or lighting and is essentially used for drawing triangles, very fast.

If you need a 3D engine, use a 3d engine. If you're writing a 3D engine, you use OpenGL.

OpenGL should start from scratch and change its name and naming conventions, so that no one would be under the illusion that you can take OpenGL code from 1996 and make it work on OpenGL in 2012.

That would have kept JWZ happy and others who complain about the lack of fixed function and immediate mode in OpenGL too.

Btw. once you get past the basics in GL 1.x fixed function, everything becomes /ridiculously/ difficult. Look at how Quake 3 did its "shaders", for example.

2013 - the year of the Linux desktop! We always knew it would happen, eventually.

Rejoice! Now the 'But does it play X' game meme can finally die!

At least for values of X that are associated with Valve...

This is just the proverbial thin end of the wedge. Look what happened with Steam on Mac OS, first they got Steam and the Source games running, then more and more developers started releasing games on Steam for Mac (some of them even being cross compatible between systems).

The current state of the Mac side of the Steam store is way behind the Windows one, but it's still quite formidable.

Even with a boatload of patience and very, very low expectations calling the current state of Mac gaming "formidable" is extremely charitable. It's better than it was pre-Steam & pre-App Store, but that's not saying much. I'll get excited when big studios other than Valve and Blizzard are targeting the Mac as a first-party platform rather than relying on half-assed third-party Cider or Wine-based ports that barely work on even the latest hardware.

I believe that Year of the Linux desktop will truly gain momentum when Microsoft Office or something as powerful as that, is released on Linux. Does the average user need the full power of Excel or Access? No. But the familiarity with the brand name and interface of Office (no one can dispute that this is probably the most widely used software, in enterprise and home environment) will not allow them to switch to another office package. Many home users are not aware of the fact that there can indeed exist multiple software to edit the same file-type.

I'm reasonably certain that MS even knows that releasing Office on Linux would sound their death knell. Although, I do feel it's a bit unfair that it's available on Mac OS.


I still would like to know what Valve and Blizzard have encountered with Win8 that makes it trouble. I have not had any problems with Steam and WoW. Maybe they do mean drivers though..

The VentureBeat interview [1] makes it look like gaben is referring to the changes in app ecosystem (eg. integrated Windows Store with 30% margins) rather than a specific technical problem. Although I suspect a major games studio coming to Linux might also enjoy the ability to poke around the source for the few graphics drivers which are open.

1. http://venturebeat.com/2012/07/25/valves-gabe-newell-talks/

Seems likely. I think Apple only got away with the shift to the Mac App store because the market for third-party Mac software is minuscule compared to that of Windows and includes a lot of smaller devs for whom relinquishing control of sales and distribution is as much a blessing as a liability.

On Windows big players like Valve are going to be less willing to play sharecropper.

I believe that OS X doesn't prevent you installing your own "App Store", so you can get Steam on OS X.

Windows 8 doesn't prevent you from installing Steam, either. I'm running it right now on the Release Preview.

Windows RT does, but that's like comparing OSX to the iPad.

That's because Steam isn't a Windows 8 program. If you want to use Metro you have to go through the App Store.

Steam is a Windows 8 program. It's not a WinRT program. Steam works perfectly in Windows 8.

The point I was making is that Steam doesn't work on Microsoft ARM tablets powered by Windows RT, but that shouldn't be a big surprise considering no desktop software works on ARM computers without being recompiled for that platform. It's no different than OSX vs iOS.

You're ignoring WinRT on x86. This is where people are upset about Microsoft's sudden monopolizing. I don't consider it a true 'Windows 8' app if it's tucked away in the legacy desktop.

You're technically correct, but you're really just splitting hairs. If it runs on Windows 8, it's a Windows 8 program. Windows 8 was designed to run both desktop and Metro applications. We're not talking XP Mode from Windows 7 here.

The desktop is even more segregated from metro than XP mode was (since you had the seamless option for XP mode). As far as I can tell the point of Windows 8 is Metro and everything related, so I don't feel like it's splitting hairs to focus on it. If you want to get in on that new Windows 8 touchable tiled wonderful candy unavoidable primary interface you have to play by microsoft's app store rules.

Edit: haha, I wonder if this article is true about Microsoft abandoning 'Metro' in favor of 'Windows 8-style UI'. Seems relevant.

The point is that WinRT will be the default setting on many machines, adding an extra step (switch to legacy) before a user can install Steam. For your average user, this may prove a step too far, which will create a real business threat to Steam.

I would imagine all major gaming engine companies already have source code deals with the graphics card vendors, and vice versa. It's the kinda thing where everybody wants compatibility with each other and it wouldn't make sense not to share it.

Does the Steam store not have margins?

Oh, I don't know...

Maybe it has something to do with the Microsoft App Store and MS being the gatekeeper that decides if an app runs or not there. Oh, and by the way, getting a 30% cut of the price of the software, all software, a la Apple.

Windows 8 is a change in direction for the Windows app ecosystem and for the user interface as well. Valve games make a lot less sense on a touch interface, and they worry that's where Microsoft is quickly heading - to a Metro-only future. A future where everything also has to come through the app store.

No matter what kind of company you are, whether a software or hardware one, if you're doing "well" in the current/status quo Windows environment, then Windows 8 is a threat to you, because it will change a lot of things, and most likely for the worse for regular "PC-oriented" companies.

Maybe the new app store, which may drive down their profits from steam a little ;)

In their one test app they observed FPS values of 315 on Linux, and 303.4 on Windows. This seems small enough that it's a stretch to call it "trouble" on Windows--probably wouldn't even notice it.

Given that screens physically refresh at 60Hz these days, you definitely wouldn't notice it. High FPS numbers are useful as a guide to efficiency, not subjective experience.

120Hz if we are talking about gaming screens.

> High FPS numbers are useful as a guide to efficiency, not subjective experience.

Not really. FPS readings >= monitor refresh rate give skewed results that may not reflect the actual performance. Despite this, it's still used widely in performance benchmarks.

FPS is the new MIPS.
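One classic way raw FPS figures mislead, sketched with hypothetical frame times: averaging instantaneous per-frame FPS readings overweights the fast frames, while the honest metric averages the frame times themselves.

```python
frame_times_ms = [10.0, 100.0]  # one fast frame, one slow (hitchy) frame

# Naive approach: convert each frame to an instantaneous FPS and average.
naive_avg_fps = sum(1000.0 / t for t in frame_times_ms) / len(frame_times_ms)

# Correct approach: total frames over total time (i.e. average the times).
true_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

print(round(naive_avg_fps))  # 55 -- looks fine on a benchmark chart
print(round(true_fps))       # 18 -- what the player actually experiences
```

This is the same reason frame-time percentiles have become the preferred benchmark metric: a high average FPS can hide exactly the stutter that ruins the subjective experience.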

303 in OpenGL on Windows. 270 in DirectX.

Yep, most probably. For example: I have heard audiophile-grade sound quality differences between Linux (Ubuntu 10.04) and Windows 7 due to the drivers [1].

[1] Dual boot on a Sony Vaio FZ-21S, with onboard ASIO chipset; headphones: Pioneer SE-A1000 (equivalent to a Sennheiser HD 595); no equalizers.

The only way to make "audiophile-grade" comparisons is through blind tests in an appropriate room.

I'd just like to point out that you'll get more variation in your sound by just turning the mini-jack connector than by modifying any driver, which does nothing more than copy a stream of digital audio into the buffer of the sound card to be sent to the DAC.

The only area where I've seen changes caused by drivers is on latency, a latency that matters only in a creative context.

> who do nothing more than copying a stream of digital audio into the buffer of the sound card to be sent to the DAC

That's a gross oversimplification of what the audio driver does. Mixing, leveling, EQ, spatiality, and a lot of other details are handled by the driver as well. My last Linux laptop, running Ubuntu, had an annoying hiss when the levels were at 100. The fix was enabling SSE2, which got disabled every time the kernel was upgraded. From that example I know the driver is doing far more than streaming audio to the DAC.

Hmmm, maybe... I suspect it could be something to do with the ASIO drivers, because there is some "spatial-ness" enabled when I use Linux instead of Windows for such stuff. To verify this, I tried playing a FLAC ripped directly from an Audio CD I had, at approx. 1140 kbps, and I'm not sure how to put it, but I can confirm there was a "spatial-ness" and more instrument-level distinguishability on Linux. Maybe it's specific to just this laptop? Who knows...

An increased "clarity" or "spaciousness" of sound can be caused by anything from a slight increase in total gain (even as small as 0.5dB) to bypassing or enabling bass and treble controls, even if the controls are set to zero. Some sound chips also have "3D" effects which mix inverted and/or delayed copies of the left and right channels with each other, that can increase or decrease perceived spaciousness. Differences in power management approaches between the operating systems can increase or decrease electrical noise in the system (more noise means finer details are masked), and could also affect the accuracy of the sample rate clock, resulting in different amounts or types of sample jitter between OSes. There are other DAC parameters (i.e. DC bias -- some chips might be able to produce a balanced signal between -Vs and +Vs, but only if they have a negative supply; otherwise they have to fall back to 0V to Vs) that Linux may be programming differently.

The best way to identify any differences is to run the audio into a calibrated measurement device (at the very top end an Audio Precision box) and check the gain, frequency response, jitter, THD+N, etc.
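To put a number on how small such differences can be: the 0.5 dB gain change mentioned above corresponds to only about a 6% change in amplitude. A quick sketch of the conversion, using the standard 20·log10 voltage convention:

```python
def db_to_amplitude_ratio(db):
    """Amplitude (voltage) ratio corresponding to a level change in dB."""
    return 10.0 ** (db / 20.0)

ratio = db_to_amplitude_ratio(0.5)
print(round((ratio - 1.0) * 100, 1))  # 5.9  (percent amplitude increase)
```

A change that small is near the threshold of audibility, which is exactly why blind, level-matched comparison is needed before attributing "clarity" to a driver.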

Thank you, this is a very clear and crisp answer. Something I definitely wanted to know about! Cheers!

Edit: Will calibrate and let you know mate. Thanks!

These kinds of things can be surprisingly subjective. An audio driver in this case shouldn't be doing much more than shoveling bytes into a device buffer.

An example of expectations shaping opinion in audio: I remember walking into a small upmarket shopping mall and thinking "wow, the sound system in here is amazing - it sounds just like there's an actual piano being played". I turned the corner, and there was a pianist on a baby grand...

Latency can be a problem also when listening to music. HP Power Manager in recent versions (January to June 2012 verified on several EliteBooks) adds so much latency once each 30 seconds that even I can hear it. I even lost entire keypresses.

DPC Latency maybe? http://social.technet.microsoft.com/Forums/en/w8itproperf/th...

Other than that, it's just Valve making a Linux version of Steam and trying to promote it.

I don't think it has anything to do with the OS itself. At least not in a "oh well now making games for Windows is 50 times harder!" way. It's mostly a disagreement with MS's push for an Apple-like app store for a DESKTOP OS.

They don't state whether the comparison between DX and OGL was rendering at identical quality or not. Given that OGL doesn't support many niceties that DX does, one can assume not.

Which would make the entire comparison completely pointless. To the point of the blog post almost being a trollpost. Valve have a clearly defined agenda here, remember.

> Given that OGL doesn't support many niceties that DX does; one can assume not.

This is incorrect. OpenGL is almost feature equivalent to D3D.

Properly done, they should be able to get similar results. If they spend a lot of effort on it, they should be able to get bit accurate results.

The blog post makes no mention of the steps they went through to achieve quality parity though. So one can assume that they did not.

OpenGL 4 and D3D 11 are pretty much feature equivalent.

And the speed difference measured is also "pretty much" insignificant.

It doesn't look significant because the game is old and already runs pretty fast on modern hardware. In a more GPU-intensive game, a 20% difference could be the difference between a tolerably fluid framerate and a jerky one.

20% is quite significant. If it was 50% better, then we'd already be saying that DirectX performance is cr*p compared to OpenGL. But 20% is definitely a decent boost, especially since you'd expect DirectX to fare better than OpenGL.

To be fair though, if you took a game that seriously stresses the GPU, you wouldn't see a 20% difference. You would probably see about the same constant difference of about 0.5 ms.

For 60 FPS that's an increase to 61 FPS.
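The arithmetic behind that claim, as a quick sketch (FPS figures are from the benchmark post; treating the API overhead as a constant per-frame cost is the assumption here):

```python
def frame_ms(fps):
    """Frame time in milliseconds for a given frame rate."""
    return 1000.0 / fps

# Benchmark figures: ~270.6 FPS (Direct3D) vs. 315 FPS (OpenGL on Linux).
saving_ms = frame_ms(270.6) - frame_ms(315.0)
print(f"per-frame saving: {saving_ms:.2f} ms")   # ~0.52 ms

# Apply the same constant per-frame saving at a 60 FPS baseline.
new_fps = 1000.0 / (frame_ms(60.0) - saving_ms)
print(f"60 FPS becomes:   {new_fps:.1f} FPS")    # ~62 FPS
```

In other words, a headline 44 FPS gap at ~300 FPS shrinks to an extra frame or two per second at 60 FPS, if the saving really is a constant per-frame cost.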

I don't think it's fair to compare DirectX against optimized OpenGL. After they optimized the OpenGL implementation on Windows, they got 315/303 = 1.04, which is 4%.

I know some people claimed certain Wine games ran faster on Linux than on Windows (WoW, for example), but it turned out a lot of the advanced shaders and effects weren't running through Wine, causing the apparent performance increase.

I wish they'd add Wine as a 3rd 'OS' for comparison. Currently they're comparing Ubuntu 12.04 native vs Windows 7, would be really interesting to add the benchmarks for the Windows version running on the latest Wine on Ubuntu 12.04.

Take these results with a grain of salt:

> We are using a 32-bit version of Linux temporarily and will run on 64-bit Linux later.

Why? Aren't 64-bit versions supposed to be even faster?

I wonder whether larger pointers could result in being able to cache less media. But that's a wild guess.

At the same time you're executing fewer instructions, because the 64-bit instruction set is a bit more powerful. There's a balance between many things here.

If they use the x32 ABI, that won't be a problem.

Ah yes, of course. But if so, wouldn't they have issues interacting with the driver, since it has to be 64-bit?

AMD64 would bring more cache pressure on the CPU, but also a lot more memory for caching the world, plus more registers, a better instruction set… For a game, I expect memory would make a large difference.

Maybe a stupid question and I've missed something, but why even bother telling us they're using 32GB of RAM for benchmarking if they're using 32-bit Ubuntu? Just to be totally transparent? Is Windows 7 32-bit too?

I'm not sure if this is the right answer or not, but the Linux kernel has supported PAE[1] for ages, which allows more than 4GB of RAM in a 32-bit OS (though a single process still can't use more than 4GB, if I understood that correctly). Maybe someone else with a better understanding of the 32-bit/64-bit differences can chime in.

[1]: https://en.wikipedia.org/wiki/Physical_Address_Extension

PAE would definitely enable them to use all 32GB of physical memory.

Both Windows and Linux have PAE support, but non-server editions of Windows still limit the physical address space to 4GB (although this might seem to make PAE support pointless, there are other advantages of PAE, like the NX/XD bit used for DEP).

You are also correct that the per-process limit of 4GB of virtual memory is not affected by PAE. In case the advantages of PAE once again seem pointless, remember that a system can execute multiple 32-bit programs each with its own independent address space, so you can have, in theory, eight programs each using its own 4GB on a 32GB system without running out of memory.

In practice, the platform imposes a limit of <4GB for physical memory, with the remainder used for legacy support and hardware DMA. Also, the operating system imposes a limit of <4GB for virtual memory, with the remainder used by the kernel.

I've only dabbled in Linux, but what's the advantage of using 32-bit Ubuntu? Why not the 64-bit version? Especially in the Linux world, where you deal with binaries less often compared to Windows.

Is it because of the video card drivers?

All things being equal, on 32-bit systems, pointers are 32 bits wide, and on 64-bit systems, pointers are 64 bits wide. So if your program previously used X bytes of memory for storing pointers, it would now use 2X bytes. Depending on how pointer-heavy your memory usage is, the impact of this change could be anywhere from negligible to significant.

This is part of the reason why games remain 32-bit on Windows as well (that, and compatibility with 32-bit versions of the OS).

However, there are, on x86 at least, other reasons to go 64-bit, perhaps the most prominent of which being the availability of additional general-purpose registers (8 more, to be precise).

An ABI for Linux has been proposed called x32 that would run 64-bit code but use only 32 bits for pointers. This would re-impose the 4GB per-process memory limitation but still allow the use of other 64-bit-only features. AFAIK, x32 programs are not compatible with x64 libraries, and vice versa, though.
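To make the pointer-size cost concrete, here's a small sketch using Python's ctypes to mimic the two C struct layouts involved (the Node type and its fields are invented for illustration):

```python
import ctypes

# A pointer-heavy node, typical of linked scene/entity structures.
# ctypes mirrors C layout rules, so field sizes match the native ABI.
class Node64(ctypes.Structure):
    _fields_ = [("parent",  ctypes.c_void_p),   # native pointer: 8 bytes on 64-bit
                ("next",    ctypes.c_void_p),
                ("payload", ctypes.c_void_p),
                ("node_id", ctypes.c_int)]

# The same node as a 32-bit (or x32) build would lay it out,
# modeled here with 4-byte integers standing in for pointers.
class Node32(ctypes.Structure):
    _fields_ = [("parent",  ctypes.c_uint32),
                ("next",    ctypes.c_uint32),
                ("payload", ctypes.c_uint32),
                ("node_id", ctypes.c_int)]

print(ctypes.sizeof(Node32))   # 16
print(ctypes.sizeof(Node64))   # 32 on a typical 64-bit build (incl. padding)
```

On a typical 64-bit build the pointer-heavy struct is twice the size of its 32-bit counterpart, which is exactly the cache-pressure trade-off x32 tries to avoid while keeping the extra registers.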

Am I expected to have 32GB of RAM to play Steam games? Even my office computer only has 8GB of RAM.

I've stayed out of PC gaming for a long time because it seemed to have a fetish for spending lots of money on graphics cards.

EDIT: Am I being downvoted because it's stupid to ask about 32GB because obviously every Real Gamer already has that, or because it's stupid to ask about 32GB because obviously no one needs that?

I suspect you're being downvoted because no, you're definitely not expected to have 32GB of RAM. I'd imagine the example just uses a high-end config to show that performance isn't being bottlenecked by a hardware issue (or, rather, not a hardware issue you'd normally run into, e.g. not enough memory or too slow a CPU).

Downvoting seems somewhat over the top, I'd say.

I don't think this game will even use 3GB of RAM. If that's the case, it doesn't matter that much.

315 FPS?

I was told that frame rates above what you can actually see tell you very little about actual performance. So implementation A being 10% faster than B at 315 FPS might very well be 50% slower than B in a scenario that runs at 60 FPS.

What is interesting is if the frame rate can keep a steady 30/60/120 FPS. Anything above that is just not indicative of actual performance.

Surely if they are both doing 60 FPS then by definition the performance is the same? I guess the time spent doing the actual rendering vs. waiting for the next frame might differ, but I'm not sure why the characteristics would be different at 300 FPS and 60 FPS, since surely the only difference is how often the render code is called?

Bear in mind that Valve is running this test on a much higher-end PC than most consumers are likely to have. People will want to run these games on their $500 laptops, so the difference between 270 and 315 FPS might translate into a difference between 50 FPS and 60 FPS, which would be noticeable.

If the code is "rendering" 300 FPS but only displaying 60, it could be doing anywhere from 1x to 5x as much work as if it were rendering at 60, depending on arbitrary driver internals. So it would be much more convincing if Valve had an example of "on this hardware, at these settings, it can sustain smooth 60fps under opengl and not under directx".

> the difference between 270 and 315 FPS might translate into a difference between 50FPS and 60FPS

...or it might not. Which is the problem. You can't just interpolate like that.

Or can you? I have only hearsay to go on.

It absolutely is indicative of actual performance. You could turn on vsync in either of these applications and it would lock down to 60 FPS. The point here is that, with the same game implementation, you have more performance headroom on Linux. You could potentially improve the engine in some way and use up that headroom, which would bring the performance back into line but with some sort of graphical quality improvement on Linux (OpenGL).

> You could potentially improve the engine in some way and use up that headroom

That is a headroom of a factor of 5.25. If you use that headroom to implement better graphics, your performance characteristics will likely change completely.

That's the point. A thousand vs. a million FPS when you draw a single unshaded triangle doesn't necessarily interpolate to predict the performance in a real scenario.

I would agree that it may be better if they showed results for the lowest FPS (instead of the average) and on a weaker system (where the difference would matter far more). They probably tested on a powerful developer workstation because that's the machine they already develop on.

Benchmarking is done with uncapped framerates to measure the relative performance of two systems. If you're a gamer, yes, you most likely won't see anything above 60 FPS, but the difference still matters if your hardware is 5x slower (54 FPS vs. 60 FPS)

Maybe someone smarter than me can tell me why they don't reboot OpenGL? Or better: leave the current one as-is for the CAD folk, and start a new API, "OGL-Game".

Also, I got bored the other day... I can't help but wonder if Valve would push a desktop of their own devising?

Anyone know of any interesting hires that may have gone under the radar?

While I love the idea of free and open software, I have to admit an ignorance to it in application (well, at least Linux).

Valve creates a Steam client, and gets the Source game engine running awesomely on "Linux". How do all the different desktops factor into this?

PC gamers are speed junkies. If this all goes through, I would not be surprised to see gamers migrate to Linux in droves.

They're 'irritant avoiders', not 'speed junkies'. Once you get a framerate that's fluid, the drive to up speed even further is largely confined to a tight niche. Computers are generally fast enough now - I used to know a few overclockers back when it made a difference, now I don't know any.

>Computers are generally fast enough now

No, not really. There's still lots of room for improvement, especially if you want high resolution (retina or a large display), 3D, a multi-monitor configuration, and/or high levels of anti-aliasing. And sustaining a stable 60 FPS at 1080p can be difficult on mid-range or a couple-of-generations-old hardware.

And that's probably why we're not going to get a retina macbook air anytime soon. I don't think a GPU that is fast enough to give a good experience on retina-level resolutions could be put in a MBA without severe consequences related to heat.

I don't know, the current macbook air can drive 2 large external displays.

Not to mention input lag, and display lag.

I thought "fast enough now" explicitly excluded hardware a couple of generations old.

That was a reference to the fact that current mid-range hardware is fairly equivalent to high-end hardware from a couple of generations ago.

If you read my comment again, I wasn't saying that overclockers no longer exist. They don't exist in the droves that they used to - computers are fast enough now for most gamers to have acceptably fluid FPS.

Those gamers that demand 1080p@60Hz under load are a niche, not the standard for "PC gamers". And for those who demand it, shelling out a few extra dollars for hardware that exists (previously it didn't) is much easier than messing around with hardware voltages and crossing your fingers that your GPU is one of the lucky ones.

And frankly, the idea that gamers will flock en masse to linux for a few FPS is utterly absurd.

A bit of a generalization there. For all of the players cranking away at Starcraft II, Quake III, and Counter Strike, you have plenty of "slow" players enjoying games like Skyrim, a gazillion indie games, etc.

Oh, and Farmville ;)

Usually the gaming community does not care about open source.

Gamers use whatever platform their favorite games are available on.

I really don't see this happening. They might give it a try, sure, but I'd bet the lion's share of them would be back on Windows in a week. Linux is just not as easy to use for most people, and when push comes to shove they'll go back where they feel safe knowing how things work, and knowing that things will work in the first place.

I know many people are diehard about Linux, but it's a big commitment to switch and it isn't easy. Even so, many users might make the switch only to find that Linux introduces a lot of headaches and perceived restrictions on capabilities that Windows doesn't. Sure, they could probably tweak their way past the difficulties, but after troubleshooting on a wiki or in forums for a couple of hours, it would lose its flavor for the masses.

Given how ridiculously easy it is on most hardware to install a second Ubuntu partition on a Windows box these days, I'm not sure that's really too much of an issue.

Also, the least hassle I've had migrating people to Linux has been with people who really have no idea about computers, possibly because they have no expectations of how things should be and don't fiddle with things as much.

And besides, there is no real switching. I run OS X, Windows 7 and Linux Mint, mostly (often on the same computer, although I've found triple boot on hackintoshes to be a bit temperamental). Using a new OS doesn't mean you no longer use the old one. It doesn't really restrict you at all; it just gives you more perspective and a wider range of expertise. I highly recommend it. Linux Mint is not a bad place to start, as it's compatible with Ubuntu (and therefore Debian, to a large extent) but currently has a bit more polish. - http://www.linuxmint.com/download.php
