Some of this is ameliorated by the OpenGL ES standard, which throws away a big chunk of the redundancy. But I'm not yet convinced that OpenGL has gotten away from its philosophical roots as a performance-secondary CAD API, which continues to dog its efforts to grow to serve game development needs. The fact that it's functionally the only choice for 3D rendering on non-Windows platforms is more testament to the nature of the hardware-software ecosystem of graphics accelerators (and the creative doggedness of game developers) than the merits of the language.
1. Arcsynthesis's gltut (http://www.arcsynthesis.org/gltut/) is good and reasonably thorough. He explains things well but not always in the order you'd like him to. At the end you will probably know enough to be able to figure out the rest on your own as you need.
2. http://open.gl/ is good but somewhat short. It also goes in depth into creating/initializing a context with various APIs (SDL, SFML, GLFW, ...). More of a good starting point than a complete guide.
3. http://ogldev.atspace.co.uk/ has a lot of good tutorials on how to do more advanced techniques, as well as many beginner level tutorials. I've never gone through them so I can't speak to their quality, but I've heard good things.
4. http://www.lighthouse3d.com/tutorials/glsl-core-tutorial/ is also good, but focused on the shading language.
See /r/opengl and freenode's ##OpenGL channel for more. Both those places are fairly newbie friendly (/r/opengl more so than ##OpenGL, but as long as you actually know your language of choice they're nice), so feel free to ask questions.
Here are some tutorials I wrote to hopefully make it clear how things work.
Everything you learn there will be directly applicable to OpenGL in C.
In the meantime they can continue improving OpenGL too, but without focusing on it too much; they can just let it deprecate itself, as OpenGL ES sits on billions of devices and developers increasingly target it rather than the full OpenGL.
OpenGL ES isn't trying to be a clean break. It's just a stripped-down version of the corresponding desktop version, suitable for embedded environments (e.g. OpenGL ES 2 is a stripped-down OpenGL 2.0, ES 3 => 3.0, ...).
The traditional OpenGL drivers have everything in a single near-monolithic bundle. Low-level memory management is tightly coupled with the high-level 3D primitives and the graphics driver implementation.
With OpenGL ES, the actual buffer and surface management has been split into EGL. GLES has the "useful" GL implementation.
My understanding is quite limited, as the only reason I know about this is that I have been involved in integrating Wayland into embedded systems.
They support OpenGL but aren't using it out of the box for their toolset.
See: libgcm, gnm, gnmx. Although I'm not positive that gnm/gnmx are opengl derivative.
Also see: http://develop.scee.net/files/presentations/gceurope2013/Par... (pdf warning)
To be clear: None of the APIs you listed are based on OpenGL. PSGL was an OpenGL implementation, but nobody writing a high-performance game used it because it was too slow and unreliable. The APIs used by almost all shipping games you can think of are substantially lower-level.
What's broken is that the abstraction between graphics card and data (on the screen) is too big
We haven't had drivers as troublesome/fat as these since the "softmodem" days, and even then they were simpler (Wi-Fi is also complicated).
It's too big of a gap.
In 2D graphics, you send graphical data and it is displayed. You may even write it directly to memory after some setup.
Audio, same thing. Network, it's bytes to the wire. Disk drive, "write these bytes to sector X" (yes, it's more complicated than that, but still).
With 3D, we have two APIs that have an awful amount of work to do between getting the data and displaying it.
I'll profess my ignorance of the low-level aspects; I only know "GlTriangle", OpenGL 101 kind of stuff, and I have no idea (1) how this is sent to the video card, or (2) how the card decides to turn that into what we see on the screen.
Compared to the other drivers this is a lot of work and a lot of opportunities to get it wrong.
Adding GPGPU stuff makes it easier in one aspect and more complicated in other aspects. We don't have a generic way of producing equal results from equal inputs (not even the same programming environment is available)
We don't have OpenGL, we have "this OpenGL works on nVidia, this other one works on ATI, this one works on iOS, or sometimes it doesn't work anywhere even though it might be officially allowed"
Because the GPU is cutting-edge, there is a certain amount of magic voodoo required for top performance that needs to get abstracted away- maybe this particular model of GPU you have doesn't support some common instruction. You don't want to handle that in your software, you want to hide that in the driver.
Beyond that, the API is also there to make the GPU easier to use. OpenGL is a mess, sure, but to my understanding most developers would pull their hair out and give up if they had to program the GPU directly.
Not really. It's like quantum mechanics compared to classical physics.
For instance, "branches" don't work like you'd expect. On a CPU you execute one branch or the other. On a GPU, you get things like both branches executing, with the hardware then throwing away the half that shouldn't have run, which means you're bottlenecked by whichever branch takes the longest. (Or something like that; the details escape me, but I do remember CUDA's branching doing weird things.) Point being, GPUs are weird. It's nothing like programming a CPU at all.
It's not that weird. You don't really have thousands of parallel processors, but a single processor operating on thousands of values. (Like SIMD on steroids.)
Since all operations must be done identically on all values, a "branch" is really doing both branches and recombining them with a mask of equally many booleans - as you say "throwing away" the unwanted branch.
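The mask-and-recombine behavior can be modeled in plain C. This is a simplified model of predication, not any particular GPU's hardware; the arithmetic in each branch is invented for illustration:

```c
/* Simplified model of GPU branch predication: every lane evaluates
 * BOTH sides of the branch, and a per-lane mask selects which result
 * to keep. Note that the total work is the sum of both paths. */
void predicated_branch(const int *x, int *out, int n) {
    for (int i = 0; i < n; i++) {
        int then_result = x[i] * 2;      /* "if" side, computed for every lane */
        int else_result = x[i] + 100;    /* "else" side, also computed for every lane */
        int mask = (x[i] > 3);           /* the branch condition, as a predicate */
        out[i] = mask ? then_result : else_result;
    }
}
```

Because both sides run regardless of the mask, a divergent branch costs roughly the time of both paths combined, which is where the "bottlenecked by the slowest branch" intuition comes from.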
The GPU doesn't get more complicated to support OpenGL; OpenGL gets more complicated to 1) meet the needs of developers and 2) support the GPU.
The real way to stop adding new features & ops every year is to stop caring about whether your GPU is fast.
P.S. The GPU is a computation engine that is more parallel than the CPU, but it is not analogous to a CPU with more threads. A GPU basically cannot branch (if/for/while) worth a damn, for example.
There are a few issues. One is that the GPU does much more than map/scatter/gather (reduce is hard in parallel, so it doesn't do that); look into the stages of the graphics pipeline and you'll see what I mean. The other big one is that it doesn't work like a CPU in a lot of ways, and making it general-purpose like one would lose enough of the performance that it would no longer be useful in many cases.
Really what you're asking for is a super-parallel general-purpose CPU, which really isn't what a GPU is or wants to be.
Every (well, hopefully every) graphics engineer knows how the data goes over the wire into the graphics card, and understands the stages of the graphics pipeline (or at least the ones that matter for the version of OpenGL/D3D they're targeting).
Yes, the relative complexity of the graphics pipeline means that there's much more potential for errors from the driver, but I'm not sure it's productive to say 'this is too complex' when there isn't a simpler way to get the same results.
So, one of the roles of the API is a sort of a deserializer, converting our human, linear, serial way of thinking to the GPU's parallel world.
Yeah. I would love to see a display protocol where the pixels themselves are exposed as a framebuffer I can write to. No refresh rates, no scanning; I want random access to the pixels on the screen limited only by the available bandwidth.
It's slow, but the CPU can never render as fast as the GPU.
Even before 3D cards for PCs, and even when CPU speeds got fast enough to make games like Doom, you still saw the development of specialized hardware (such as the VESA Local Bus) to support better video and graphics capabilities.
That thing is not really fast. The MSX's main CPU, an 8-bit Z80 running at less than 4MHz, could have written to the framebuffer MUCH faster than the VDP, had the VDP not been in the way of the framebuffer.
It did provide some interesting capabilities beyond that, such as sprites and custom 'glyphs', which simplified a lot of the programs at the time.
However, it did (and does, there's an active community) hamper efforts to improve the performance.
I'd like to see a Z80 blitting to a framebuffer faster than a VDP could chew through some tiles and sprites.
The general design of the famous TI VDP was fine; pretty much every 2D game console released after the Colecovision was either directly inspired by, or featured a direct successor to, that chip, so clearly there was value in that combination of tilemaps (the "custom glyphs" you mentioned), sprite multiplexing, and separate video memory. You never really needed a framebuffer or direct video memory access on those systems. The TI VDP was just barely too limited, in ways that exacerbated the flaws in the design.
The most obvious: No hardware scrolling. The NES could do per-pixel scrolling between two screens full of tiles, either arranged side-by-side or top-to-bottom. When the screen "wrapped around" the other edge, you only needed to load 1 row or column of new tile indices into place at a time, which comes to about 16-20 bytes every few frames. That's barely anything, and so NES games do just fine poking into VRAM through special registers one byte at a time.
On the TI VDP, which lacks this, you're obviously going to have big problems trying to implement smooth scrolling. More importantly, even for choppy 1 tile scrolling, you have to move the entire game area of tiles over at once. For a full screen of tiles (256/8 * 192/8) = 768 tiles, which when you add the color attribute map to the equation comes close to a kilobyte. I haven't programmed for the MSX, but that's probably too much to transfer in one vblank.
Ditto with the sprites, of which you only get 4 per line, and they're monochrome. If you want multi-colored sprites or more objects per line, you either have flickering or move everything over to framebuffer graphics and deal with that headache.
So I guess the short-sighted solution to this problem would be to improve the speed of CPU access to VRAM, and the better approach taken by Nintendo, Sega, etc. is to improve the capabilities of the VDP until it doesn't really matter how fast you can access VRAM. If the VDP is powerful enough that this isn't a burden, the advantage here is that the VDP has a lot more bandwidth available than it would if it were sharing a bus with the CPU. The same principle applied to the TI VDP-based systems, it was just a lot easier to run into their limitations.
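The bandwidth arithmetic above can be checked with a quick back-of-the-envelope sketch (screen dimensions are the ones from the comment; 1 byte per tile index is the usual assumption):

```c
/* Rough VRAM traffic for the scrolling comparison:
 * a 256x192 screen of 8x8 tiles, 1 byte per tile index. */
int tiles_full_screen(void) {
    return (256 / 8) * (192 / 8);   /* 32 columns * 24 rows = 768 tiles */
}

/* A hardware-scrolling design only needs to refresh one new row or
 * column of tile indices as the screen wraps around. */
int tiles_one_column(void) {
    return 192 / 8;                 /* 24 tiles, i.e. ~24 bytes */
}
```

Moving the whole screen is 768 bytes of tile indices alone, before you even count the color attribute map, versus a couple dozen bytes for a single wrapped column. That gap is why the lack of hardware scrolling hurts so much.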
But this is just a failure to be standards-compliant. We have multiple ARM implementations in the CPU space, we have two vendors of x86 CPUs, we have all kinds of wireless cards; but assembly for one ARM core had better run on another, your x86 binary had better run on AMD or Intel chips unless you use vendor-specific extensions, and your wireless radio had better process inbound data properly regardless of the sender.
I think the real problem in the massively parallel pipeline architecture space (aka GPUs, though they are more than just graphics processors anymore) is that they aren't treated like proper programmable computers. What we should have are compilers that build ASM binaries for each architecture, and sane ISAs.
That would mean that, at present, you would need Intel pre-SB, SB, IVB, Haswell, Broadwell, Skylake, etc. compilers. I've never read their ISA documentation, so I don't know which of these have a common core that would let you treat it like a CPU - i.e., a base instruction set with extensions like SSE or NEON. I do imagine, though, that if we weren't working at such an absurdly high level, we would see graphics hardware conform to the same model as CPUs: a common ISA with multiple implementations.
I guess the problem there is that the way all modern ISAs are handled is ridiculous and stupid. Intel "owns" x86, ARM "owns" itself, etc. The reason we don't have reasonable CPU competition is that you don't just compete on hardware implementation; you also need to pay extortion to use what is effectively the same API - the same ISA. That is one of the strengths of the graphics industry: the effective ISA that software is built against is predominantly open (most GPU code is OpenGL, despite how many DirectX games there are). We can see the difference - very little hardware runs DirectX, because vendors have to pay MS for the privilege of supporting it, and then it isn't even a standard at all, so you can't use it on anything but Windows. It is congruent with proprietary ISAs for CPUs.
An example of a non-proprietary ISA is SPARC. I've never really looked into how good an ISA it actually is, but it is royalty-free unless you want to use the branding - you can implement your own SPARC CPUs without paying jack, the way it should be.
So like I said, what we really need is a common open ISA for hardware graphics accelerators to implement, and then independently develop extensions for that can be proposed and adopted into the mainstream standard. And if that standard ever becomes overly bloated, any vendor can develop their own newer base to handle newer paradigms or fix bloat, the same way we see programming languages go.
If I didn't have to figure out how to eat, or didn't have another dozen things I'd rather do, I'd definitely want to see what I could manage writing a binary compiler for AMD's SI ISA and seeing how performant you could make some other high-level graphics language with compiled binaries. That is kind of what they did with Mantle, but their continued failure to make any effort to open that language up to standardization, or even just to publish it at all, shows that it certainly won't be the answer.
Except for the word "ISA", this sounds an awful lot like OpenGL, OpenCL, & Direct3D.
Which makes sense. You don't run a software binary on a GPU, so why does it need its own standard ISA? A GPU crunches data. The CPU feeds the GPU data. So we use APIs. You could make the GPU more like a CPU at the cost of graphics performance. Or you could integrate a CPU to the GPU, whose job is to run software and keep the GPU fed. But that sounds an awful lot like an APU.
Basically what I'm saying is a GPU is more like a network adapter- it is just fed data- and network adapters don't have their own ISA either! They do have binary blobs, but that is firmware.
Sure, building an IR from scratch is fun. But making it truly cross-platform and ready for many usages is really hard. Also, the GLSL source already is an IR between the programmer's intent and the driver's behaviour. Code is just another type of binary; it is just slightly harder to parse, but not by much, and without performance comparisons, a complaint about how hard code is to parse is invalid.
Feeding the driver GLSL can also yield much clearer error messages for programmers. I can only imagine what kinds of error messages the IR compiler would produce. Sure, hopefully, our cross-platform IR would be accepted by all GPUs without pain, but that's improbable.
Regardless, starting from a clean slate is much harder than working our way from the current state to an improved OpenGL. Just like few browsers are on board with NaCl, few GPU makers would be on board with a brand new design.
It's not that hard, as long as it remains as close as possible to the source language (i.e. GLSL). In other words, as the OP is advocating, an AST of the shaders. This removes the cost of parsing the source code (but requires, on the other hand, validating the AST, so it isn't exactly a complete gain, though it is definitely progress compilation-wise). However, I suppose that what motivated the choice of using GLSL source directly is the simplicity of the approach: no need to build the GLSL scripts separately. When working with interactive tools, that's a non-negligible comfort, imho. Another interesting aspect is the ability to build the scripts dynamically, like people do with SQL. I wonder if this approach is used by professional game studios.
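Building shader source dynamically, SQL-style, can be as simple as splicing a #define into the string before handing it to glShaderSource. The shader body here is an invented example, not from any real engine:

```c
#include <stdio.h>

/* Sketch of runtime GLSL assembly: toggle a feature by splicing a
 * #define into the source string. The resulting string would then be
 * passed to glShaderSource() and compiled by the driver. */
void build_fragment_shader(char *buf, size_t len, int use_fog) {
    snprintf(buf, len,
        "#version 330 core\n"
        "%s"                          /* optional feature switch */
        "out vec4 color;\n"
        "void main() {\n"
        "    color = vec4(1.0);\n"
        "#ifdef USE_FOG\n"
        "    color.rgb *= 0.5;\n"     /* hypothetical fog darkening */
        "#endif\n"
        "}\n",
        use_fog ? "#define USE_FOG\n" : "");
}
```

This flexibility is exactly what shipping GLSL source buys you, and it is also why the expensive compiler front-end ends up living in the driver.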
#Preamble: Direct3D cannot be run anywhere except on Windows. Unless you plan not to publish on Android, iOS, OS X, Linux, Steambox, PS4, etc., you will have to target OpenGL, no matter how much you dislike it.
#1: Yes, the lowest-common-denominator issue is annoying. However, in some cases you can make use of varying features by implementing different render paths, and in other cases it doesn't matter much. But the claim that there is something like a "restricted subset of GL4" is factually wrong. Such a thing does not exist. You either have GL4 core with all its features, or you don't. Perhaps the author means that GL4 isn't available everywhere, and they have to fall back to GL3?
#2: Yes driver quality for OpenGL is bad. It is getting better though, and I'd suggest rather than complaining about OpenGL, how about you complain about Microsoft, Dell, HP, Nvidia, AMD etc.?
#compiler in the driver: Factually, this conclusion is completely backwards. First of all, the syntactic compile overhead isn't necessarily what makes compilation slow; GCC can compile dozens of megabytes of C source code very quickly. Drivers may not implement their lexers etc. very well, but that's not a failing of the specification. Secondly, Direct3D is also moving away from its intermediary bytecode compile target and is favoring delivery of HLSL source code more.
#Threading: As author mentions himself, DX11 didn't manage to solve this issue. In fact, the issue isn't with OpenGL at all. It's in the nature of GPUs and how drivers talk to them. Again author seems to be railing against the wrong machine.
#Sampler state: Again, factually wrong information. This extension, http://www.opengl.org/registry/specs/ARB/sampler_objects.txt, allows texture state to be decoupled from sampler state, and it has been elevated to core functionality in GL4. The unit issue has not been resolved, however; nvidia did propose a DSA extension, which so far hasn't been taken up by any other vendor. Suffice to say, most hardware does not support DSA, and underneath it's all texture units, even in Direct3D, so railing against texture units is a complete red herring.
#Many ways to do the same thing: Again, many factual errors. Most of the "many ways" the author is railing against are legacy functions that are not available in the core profile. It's considered extremely bad taste to run a compatibility profile (for earlier versions) and mix and match various strata of the API together. That'd be a bit like using Direct3D 8 and 11 functionality in the same program. The author seems to basically fail at setting up his GL context cleanly, or doesn't even know what "core profile" means.
#Remainder: Lots of handwaving about various vaguely defined things and objections to conditional jumps in the driver; again, the author seems to be railing against the wrong machine.
Conclusion: Around 90% of the article is garbage. But sure, OpenGL isn't perfect, and it's got its warts, like everything, and it should be improved. But how about you get the facts right next time?
If key parts of the API that you need live in "the extensions," then the API itself is not a properly-tuned abstraction to serve your needs well.
Hell, pre-Sandy Bridge Intel hardware only supports 2.1. It is also only in the last 5 years that both mainstream GPU manufacturers have supported GL4 at all; I have a GTX 285 that can't do tessellation either.
The standard addressed the concern, and now it's up to implementors to do it.
"OpenGL is broken" refers to the market adoption of the standard, because when you're developing graphics software for consumers that's the aspect you care about.
(This isn't the only point the OP is arguing about, but anyway.)
What exactly would be the alternative? Either there's a standard, and adherents must follow its core features to get the compliance stamp, or there are no standards and each vendor goes its merry way, leaving it up to third parties to follow up on all the completely different APIs resulting from that. As someone else said, there are core levels in the standard, which give guarantees to third parties. It hasn't always been like that, but now we do have them.
As for support of the latest features on my old Geforce 7600, I guess I should accept the fact that they cannot be implemented efficiently, and if I want to play the latest installment of Wolfenstein, I'll have to grab a new card. Or I could try getting a more modest game. There is clearly a commercial aspect to this whole upgrade mechanism too, but since upgrades are necessary for technical reasons, it's difficult to argue against the mercantile part.
In my experience, OpenGL walks the middle line. There is certainly a core set of functions that (almost always, discounting buggy drivers) work. But the core set doesn't span all the critical functions you need for what modern game players would consider a performant game engine (such as hardware synchronization to eliminate "tearing"). So game engines will need to factor in the GL extensions, which put us in the "up to third parties to follow up" world. It's a frustrating environment to work in; you can't really ever trust that your code will either succeed or fail on any given hardware configuration, and you're stuck playing whack-a-mole on bug reports that you lack the hardware to reproduce.
> As for support of the latest features on my old Geforce 7600, I guess I should accept the fact that they cannot be implemented efficiently
I wish it were that simple. That, I could deal with.
I've worked with a card that had a bug in the GLSL compiler. A particular sum simply wasn't compiled, and we had to work around the problem by multiplying by 1.000000000000000001 to force the compiler to generate the bytecode for the whole calculation (the fact this trick works is something one of our veteran engineers "just knew would work," so we got lucky). There is functionally no chance of that software bug ever getting patched; card vendors don't care about older versions of their technology, and even if a driver version were out there that patched the bug, you can't trust machine owners to keep their drivers up-to-date.
More frustratingly, as I mentioned elsewhere, I've worked with cards that implement things that should be high-performance (like stages of the shader pipeline) in software, just to claim they have the capability. Since OpenGL gives you no way in the API to query whether a feature is implemented in a reasonably-performant way, you either do some clever tricks to suss this stuff out (F.E.A.R. has a test mode where it runs a camera through a scene in the game and quietly tunes graphics pipeline features based upon actual framerate) or gather bug reports, blacklist certain card configurations, and keep going.
Old cards not being as powerful as good cards I can deal with; if we could simply say "Your card must be X or better to play," we'd be fine. New cards with bugs and under-performant cards that lie about their performance to clear some market hurdles are the maddening corners of the ecosystem.
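The probe-and-tune approach described above (F.E.A.R.-style benchmarking, then disabling what underperforms) might be sketched like this; the struct fields, feature names, and the frame-time budget are all invented for illustration:

```c
/* Sketch of benchmark-then-tune: run a representative scene once per
 * optional feature, and disable any feature whose measured frame time
 * suggests a software fallback rather than hardware support. */
typedef struct {
    const char *name;
    double measured_frame_ms;   /* frame time with only this feature enabled */
    int enabled;
} feature_t;

void tune_features(feature_t *features, int count, double budget_ms) {
    for (int i = 0; i < count; i++) {
        /* A "supported" feature running at seconds per frame is almost
         * certainly emulated in software: turn it off. */
        features[i].enabled = features[i].measured_frame_ms <= budget_ms;
    }
}
```

The point is that the engine never asks the driver whether a feature is fast (the API has no way to say), it just measures and decides.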
Well, is that an OpenGL issue? Wouldn't you get that very same problem with D3D?
> reasonably-performant way
I don't understand. How can you objectively define such a thing? Doesn't it depend on the workload? If you're pushing 3 tris per frame, any feature can be labeled as reasonably performing, but if you have 300M, can any card these days maintain a reasonable framerate even on the most basic settings? I am exaggerating on purpose; some apps will require a very small amount of work at each stage of rendering, and could reasonably afford an extra pass even if it is implemented in software. And in other cases (which might be the majority), that doesn't cut it. I don't see how there could be an objective way of deciding whether a feature is sufficiently performant. Your example is telling: an application as complex as F.E.A.R. should clearly run its own benchmarks (or keep a database) to decide which features can be included without hurting performance. And even then, players have different perceptions of what constitutes playability.
I agree with you: multiple standards, multiple vendors, multiple products; the fallout is compatibility struggles at worst, varying performance at best. But that's a point D3D and OpenGL have in common, not a divergence. Am I missing something?
Isn't it interesting that not more developers are pushing for open source drivers? Try finding a GLSL compiler bug in mesa and asking in #dri-devel on freenode or bugs.freedesktop.org. There you most likely get a very quick and helpful reply.
... which is graphic-developer-ese for "I hope you like branching your render engine some more."
OpenGL extensions are typically optional parts of the spec, available on a per-implementation basis, and are not pluggable: you're at the mercy of the vendors, assuming your target audience even has up-to-date drivers at all.
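Checking for an extension at runtime against the legacy space-separated list (what glGetString(GL_EXTENSIONS) returns on pre-3.0 contexts; modern core contexts enumerate with glGetStringi instead) has a classic pitfall worth showing: a naive substring search also matches prefixes of longer extension names.

```c
#include <string.h>

/* Check for an extension token in a space-separated extension string.
 * A plain strstr() is a classic bug: "GL_ARB_foo" would also match
 * inside "GL_ARB_foobar", so we only accept whole-token matches. */
int has_extension(const char *ext_list, const char *name) {
    size_t n = strlen(name);
    const char *p = ext_list;
    while ((p = strstr(p, name)) != NULL) {
        int starts = (p == ext_list) || (p[-1] == ' ');
        int ends   = (p[n] == ' ') || (p[n] == '\0');
        if (starts && ends)
            return 1;
        p += n;   /* keep scanning past this partial match */
    }
    return 0;
}
```

And even when this check passes, you only know the driver claims the extension, not that the implementation is correct or fast, which is the complaint above.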
Compare the following:
- Java class works on the JVM on Windows, Linux, OS X...

versus:

- OpenGL implementation for Intel HDs on Windows
- OpenGL for Radeons on Windows
- OpenGL for Geforces on Windows
- OpenGL for Intel HD on OS X
- OpenGL for Radeon on OS X
- OpenGL for Geforce on OS X
I suspect the point about this subset-of-GL4 thing is that what you can rely on in practice is OpenGL3, plus enough extensions to bring it up to something closer to OpenGL4. Take a look at the Unity3D hardware stats (http://stats.unity3d.com/index.html) or the Steam hardware stats (http://store.steampowered.com/hwsurvey/) - ~50% of the Unity market and ~25% of the Steam market is pre-DX11, which I believe limits it to OpenGL3 at most.
I might agree that the author of this piece is a bit careless about distinguishing between OpenGL as it is implemented and OpenGL as it is specified. Aside from the bind-points nonsense and the massive pile of random 16-bit values that is GLenum, the OpenGL spec doesn't actually require everything to be this total clusterfuck. I'm not aware of any specified reason why the GLSL compiler couldn't just as easily be a shared component (as it is in Direct3D), for example, and I don't recall the bit where the spec insists on shitty drivers. Still, we have to work with what's there, not with what should be there, and when what's there is a bit of a mess, it's absolutely fair to call it what it is.
Well, it could be, if the specification of the language requires a lot of work in the lexer, but that's probably not the case.
That's what made it a bad decision to compile the entire shader in the driver. They are incompetent, and the ARB picked a design that exacerbated their incompetence instead of working around it. Users don't care why their OpenGL implementation has bugs, they just care that it does.
These companies are businesses that need a business reason to support your platform. Until more people are playing triple-A games on platforms that use OpenGL, you can't really fault them for not spending money where it doesn't make sense. Apple designs its own chips for its mobile devices, so I'd think the OpenGL on iOS would have better driver support.
Apple licenses Imagination Technologies' PowerVR GPUs; they do not design them themselves. The OpenGL driver also comes from Imagination.
I have also had my statement explicitly confirmed by game & driver developers for desktop OS X; I can't say with 100% certainty that Apple owns the iOS stack, but I am relatively certain it is true, given that it is 100% known to be true on OS X.
Common-sense-wise, I find it highly unlikely that iOS would be bolted directly to the PowerVR graphics stack. It would make it too hard for them to move to another graphics vendor. So I think it is almost a certainty that they own some, if not all, of the graphics stack, even if it is derived from proprietary PowerVR code & hardware.
But that's not my issue, I acknowledge freely that OpenGL drivers are bad. I just don't quite see how that's a failing of OpenGL, rather than the vendors who actually implement the drivers.
Also uses Clang and a bunch of Unixy open source stuff...
The core OpenGL feature set and API factoring are almost certainly things you can expect to be similar on console platforms, at least where the hardware matches. So in that sense 'It's OpenGL' is almost true!
I've had to deal with cards that explicitly lie to the software about the capabilities by specifying they support a shader feature that's implemented in software without acceleration (!!!). There's no way to tell via the software that the driver is emulating the feature besides enabling it and noticing your engine now performs in the seconds-per-frame range. So we blacklist the card from that feature set and move on, because that's what you do when you're a game engine developer.
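The blacklist-and-move-on workflow might look roughly like this; the table entry is a made-up example, not a real card or a real bug:

```c
#include <string.h>

/* Sketch of a card/feature blacklist built up from field bug reports:
 * known-bad (renderer, feature) pairs are refused even if the driver
 * claims support. The entry below is hypothetical. */
typedef struct {
    const char *renderer_substr;   /* matched against glGetString(GL_RENDERER) */
    const char *feature;
} blacklist_entry;

static const blacklist_entry blacklist[] = {
    { "ExampleGPU 9000", "geometry_shader" },   /* invented entry */
};

int feature_blacklisted(const char *renderer, const char *feature) {
    for (size_t i = 0; i < sizeof blacklist / sizeof blacklist[0]; i++) {
        if (strstr(renderer, blacklist[i].renderer_substr) != NULL &&
            strcmp(feature, blacklist[i].feature) == 0)
            return 1;
    }
    return 0;
}
```

Substring matching on the renderer string is deliberate: driver updates often change version suffixes, and you usually want the whole card family blacklisted anyway.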
All platforms support OpenGL; I'm not sure what you're getting at.
Like it or not, neither of those are true. It might be nice if they were.
OpenGL is obviously still a widely-supported choice on desktop PCs (and mobiles), so that's not in question.
Yeah? So? If the graphics API which provides reasonable guarantees of feature availability and performance exists only on one platform, you code for that platform if you need those guarantees. There is a reason why until recently virtually all PC games, and a great many other graphics applications, were Windows only.
Yes driver quality for OpenGL is bad. It is getting better though, and I'd suggest rather than complaining about OpenGL, how about you complain about Microsoft, Dell, HP, Nvidia, AMD etc.?
Khronos Group failing to provide rigorous conformance tests and driver quality standards IS A FAILURE OF OPENGL. That's why Microsoft provided extensive support for OEMs and driver developers from the get-go with Direct3D: they wanted their API to be used.
If you care about graphical performance, then OpenGL is simply not up to the task. You should be using Direct3D, period.
I'm always confused when people say stuff like that.
1. Random forum user says "OpenGL is not up to the task".
2. An AAA game developer says: "That the Linux version runs faster than the Windows version (270.6) seems a little counter-intuitive, given the greater amount of time we have spent on the Windows version. However, it does speak to the underlying efficiency of the kernel and OpenGL." http://blogs.valvesoftware.com/linux/faster-zombies/
Who do I believe?
Valve was testing a game that was already years old when the tests were run, based on an engine that was written for DX9, and presumably only the happy case in terms of hardware and driver support (meaning an NVIDIA card and proprietary driver). If you're developing a AAA game for release next Christmas, you're going to want it to look better than the state-of-the-art from five or even three years ago. If you attempt that with OpenGL, you are going to run into holes in driver support for various GPU features, not to mention discrepancies in which rev of OpenGL is actually supported by the platform, which you will then have to work around with various vendor-specific extensions, which means more code paths and more things to debug. And then once all those holes have been painstakingly plugged, you can get to work on the performance hiccups...
Or you could use Direct3D, where everything Just Works on the same code path.
You can bet your ass that when HL3 is released as foreordained by the prophecies, it will be a Direct3D-only title at first. And remain so for at least a year.
I'm not excusing OpenGL for this fact, I'm just stating that if people cared about the quality of OpenGL drivers and made purchasing decisions based on that, then you bet your ass the manufacturers would make the OpenGL drivers better.
As for OpenGL's issues - that's what happens when a spec gets old enough. But the fact remains, it's the only graphics API that could be called 'universal', they have modernized the spec, and despite all its failings, somehow it still delivers better performance than DirectX...
> While the current GL spec is at feature parity with DX11 (even slightly ahead), the lowest common denominator implementation is not, and this is the thing that I as a developer care about.
Isn't DX restricted to Windows, meaning its lowest common denominator implementation is nothing at all?
The problem is that many OpenGL drivers implement the base OpenGL specs and then a couple of extensions here and there. Because of this you can't rely on what's available and you end up with something that's more akin to a mix of many previous versions of DirectX: some advanced capabilities but many basic ones missing.
The fact that DX is hugely popular as a backend for Win/Xbox even for games/engines that also target non-DX platforms is pretty compelling evidence of its popularity among developers.
Mobile on the other hand seems decent - OpenGL ES 2+ seems to be a well-designed clean and relatively minimal API with widespread support.
No longer bound by OpenGL.
Back when I worked in Windows/console games, my market demanded D3D (plus Sony's and Nintendo's wacky custom APIs). Now I work in mobile and my customers demand GLES. GL advocates used to cry "GL has the better tech! D3D only wins because of politics! Boo!" It will be quite a turn if they switch tunes to "D3D has better tech, but GL still wins because of politics! Yay!"
DX12 API Preview vid http://channel9.msdn.com/Events/Build/2014/3-564
DX12 API Preview slides http://view.officeapps.live.com/op/view.aspx?src=http%3a%2f%...
What are you actually comparing? The yet-to-be-released DirectX 12 vs. OpenGL 4.4? Or vs. what will actually "compete" against it once it is released and used, OpenGL 4.5 or 5.0? Are you just assuming future versions will be worse than Direct3D? "Used to cry"? That sounds all very fair and objective.
And even then, the attitude AMD has had in the two years since announcing it certainly dooms it to a death of irrelevance. If they were serious about it being an open API, they would have opened up proposals for modifications and considerations from other vendors before deploying it in games.
They could still do a heel-face turn, of course, but I would never expect it. OpenGL 5 is more likely the answer, but that requires Khronos to accept that even the core profile shift in 3.2 was not enough: they need to rewrite the entire shader language and standardize a modern pipeline without all this redundant, old-school backwards-compatibility bullcrap that makes the implementations so slow due to state nonsense.
That kind of defeats the purpose then. If AMD wanted any kind of chances for adoption they had to open it from the start.
> But that requires Khronos to accept that even the Core Profile shift in 3.2 was not enough, they need to rewrite the entire shader language and standardize a modern pipeline without all this redundant backwards oldschool bullcrap that makes the implementations so slow due to state nonsense.
How exactly is Khronos managed? Is there someone with decisive power, or it's simply by consensus of all participants? For example can Valve or any other interested party come tomorrow with a serious overhaul of OpenGL and expect that such changes will be accepted?
How does he figure OpenGL mandates this? OpenGL allows a (caching) GLSL compiler to be part of the OS OpenGL support, leaving drivers to consume bytecode or some other kind of IR.
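A caching compiler of that kind would typically key compiled results on a hash of the GLSL source, so the expensive front-end only runs on cache misses. This sketch uses FNV-1a purely as a familiar, simple hash; a production cache would also key on driver version and GPU model, since the generated code depends on both:

```c
#include <stdint.h>

/* Cache key for a compiled shader: a 64-bit FNV-1a hash of the GLSL
 * source text. Identical source -> identical key -> cache hit, and the
 * driver can consume the previously generated bytecode/IR directly. */
uint64_t shader_cache_key(const char *glsl_source) {
    uint64_t h = 1469598103934665603ULL;          /* FNV-1a offset basis */
    for (const unsigned char *p = (const unsigned char *)glsl_source; *p; p++) {
        h ^= *p;
        h *= 1099511628211ULL;                    /* FNV-1a prime */
    }
    return h;
}
```

Nothing in the GL spec forbids this arrangement, which is the point being made above: the shader front-end could be a shared, cached OS component rather than reimplemented in every driver.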
!cache in DDG will get that for you in the future.
Which means I envy you, in short. ;)