- Soft shadows work in WebGL fairly well using PCSS: https://clara.io/player/v2/8f49e7c3-7c5e-43f0-a09c-33a55bb1b...
- Translucency can be faked effectively using a few different methods. https://clara.io/view/5c7d28c0-91d7-4432-a131-3e6fd657a042
- Screen space ambient occlusion is amazing, if you use SAO. SAO test: https://clara.io/view/2e1637a7-a41d-4832-923a-e6227d1ebaaa
- Screen space reflections also work. Our reflections are this: https://clara.io/player/v2/b55f695a-8f4a-4ab0-b575-88e3df8cd...
- Fast high quality depth of field: https://clara.io/player/v2/ce7d91ed-1163-4cbc-b842-929adc4ef...
- Real-time global illumination: https://www.siliconstudio.co.jp/middleware/enlighten/en/
So while I think that raytracing is awesome, it generally will not increase existing real-time render quality that much. In my experience with the game industry, even if you have a better way of doing things, if it takes more CPU/GPU cycles than a hack that achieves basically the same quality, it will not be adopted. It is that simple.
Physically Based Rendering has become the dominant approach in VFX because it's a huge simplification and productivity boost. Integrating tons of special-purpose hacks into one renderer isn't just hard for the renderer devs; the control parameters exposed to artists become an absolute nightmare. Combining the PBR perspective with raytracing greatly simplifies both sides of this.
Not every game is trying to be the next Far Cry. Ray tracing will be adopted by teams that value that unification and simplification over getting the very last bit of performance possible by a pile of hacks. As the hardware improves, which still seems likely, we'll see the balance point of who makes that call shift in favor of tracing IMO.
It is sort of neutral on render time for the most part.
In time, I am sure we will be in a raytraced future if GPU trends continue.
It's about the wholesale switch to a fully path-traced framework with a consistent mathematical foundation, as opposed to the layered system of hacks and chained prepasses that prevailed in the early 2000s.
That's why I'm saying the combination of PBR and tracing is a huge productivity boon to both sides, art and tech. It has a performance cost, but if/when hardware can pay that cost in real time, games absolutely will use it.
If the ray tracing can be done at or around 1 ray per pixel or less, using a network trained to merge information over multiple frames and upscale, we could probably get away with even less. Maybe more so if we can feed in a depth map, velocity map, a flat shaderless rendering, or other information to help guide the DNN.
Might even end up faster than current raster renders.
Also, many modern games have been hitting 60fps at 4k on mid-high tier cards for years - you can hit 60 at 4k with reasonable quality settings in a game like MGS5 or GTA5 on a single 970. If you get a 1080 way more stuff runs great, even at higher settings.
Many modern games have an internal resolution slider as well, so you can set the internal resolution to 80 or 90% and still get your UI rendered at full 4K resolution. If the game has temporal anti-aliasing the difference tends to be hard to spot.
Finally, if you've got a freesync or gsync monitor, a few dips down to 50fps at 4k aren't going to be super noticeable, and the games look great. :)
Which has been said since the early 2000s. But the question is when, or whether it will ever come at all.
If you look at roadmaps, we surely don't have that in the next 5 years; it's unlikely even in 10.
PBR is about having maps that are easy to understand through physical analogy, and about simplifying the workflow so that it's mostly shared across render engines, applications and pipelines. It's about the ability to create textures in Mari or Substance, which use rasterizing engines, and see the final render in Arnold look nearly identical.
Not only that, but they compare everything with "1 sample per pixel" knowing that the SURE filter uses per-pixel statistics of variance that require more than one sample per pixel to start working.
That's not always the case. Physically based materials, tonemapping and many more were widely adopted in game engines because they're easier to get proper and predictable results with. It's the same with most rendering techniques based on real physics, they are usually easier to work with, which is a very real advantage which allows devs to make more realistic environments in less time.
It's the same process that led the non-realtime world to physically based renderers, although it is slower in GPUs because of obvious hardware constraints.
For a long time, games pushed the graphics envelope, but that doesn't seem to be the case any more. So perhaps that's less important today, and GPUs have overshot gamer needs.
OTOH, development costs of AAA games are incredibly high, with larger worlds at higher resolutions. It's even worse at 4K.
So the question of the magnitudes remains.
2018: the year of raytracing in games, finally?
A data-point is that Imagination has had hardware raytracing for several years now: https://www.imgtec.com/legacy-gpu-cores/ray-tracing/ But without publicised success.
- That's not what an AO pass should look like. But you shouldn't even need AO if your GI/shadows look good.
- GI should have detail to it; every realtime example is sooooo blurry.
- The screen space reflections sometimes look OK, but it really depends on your scene; you get all sorts of reflections in the wrong place.
- Yucky DOF without the nice bokeh on highlights, and all those edge problems you get with a z-space blur.
The beauty of raytracing with a unified sampler is that it makes the algorithm for each of these features you listed incredibly simple, and it distributes CPU/GPU time to what's important depending on that part of the image.
Scene with lots of motion blur: more primary samples, fewer samples for GI/shadows/reflections - automatically, based on how much GI/shadows you see.
Scene with lots of GI: more samples in GI, fewer for reflections etc. - automatically.
You can have a complex scene with reflections/gi everywhere, and then turn a heavy DOF on and get faster frame times.
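A minimal sketch of that idea, assuming a simple per-pixel variance estimate decides where the next samples go. The tracePath callback and the names here are hypothetical placeholders, not any particular renderer's API:

```cpp
#include <cstdint>
#include <vector>

// Running mean/variance per pixel (Welford), so the "where is it noisy?"
// signal comes for free from samples already taken.
struct Pixel { float mean = 0.f, m2 = 0.f; std::uint32_t n = 0; };

void addSample(Pixel& p, float value) {
    ++p.n;
    float delta = value - p.mean;
    p.mean += delta / p.n;
    p.m2 += delta * (value - p.mean);
}

float variance(const Pixel& p) { return p.n > 1 ? p.m2 / (p.n - 1) : 1e9f; }

// Spend a fixed per-frame ray budget where the estimated error is highest:
// heavy DOF/motion blur pulls samples toward primary rays, noisy GI pulls
// them toward indirect bounces, with no per-effect tuning.
void adaptivePass(std::vector<Pixel>& film, std::uint32_t budget,
                  float (*tracePath)(std::uint32_t pixelIndex)) {
    for (std::uint32_t s = 0; s < budget; ++s) {
        std::uint32_t worst = 0;
        for (std::uint32_t i = 1; i < film.size(); ++i)
            if (variance(film[i]) > variance(film[worst])) worst = i;
        addSample(film[worst], tracePath(worst));
    }
}
```

A real renderer would amortize the argmax and trace many paths per iteration, but the control loop is the same: one error metric, one budget, no per-feature knobs.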
Techniques other than raytracing have an artistic place, not merely a pragmatic one.
These days they're pretty much 100% path-traced. But most animated films out there in the history of 3d animation were rasterized.
The challenge of all of this was doing it using rasterization techniques, or integrating more realistic techniques using path tracing into a standard rasterization pipeline in a way that didn't kill performance. But now render farms are big enough and technology has advanced far enough that they can just path-trace everything.
Nowadays ray tracing is common, but as recently as five years ago it was rare. Not all high-quality animation is ray traced.
Everybody in CG knows that raytracing is the holy grail, in that it allows a single, universal rendering model, and that triangle-based techniques are always hacks aimed at approximating a raytracing result (or even better, a radiosity-based algorithm).
We have always surfed on the edge between cramming more millions of triangles into our graphics cards and being able to make a complex calculation per pixel. It is arguable that if hardware manufacturers had taken a different path, GPUs would be more comfortable with raytracing now, as it requires a very different architecture (making lookups in a big scene cheap vs. matrix operations).
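To make the "lookup in a big scene" point concrete, here is a rough sketch (not any particular library) of traversing a bounding volume hierarchy: it's dependent loads and divergent branches, a very different workload from the dense triangle/matrix pipelines GPUs were built around.

```cpp
#include <utility>
#include <vector>

struct Node {
    float bmin[3], bmax[3];    // axis-aligned bounding box
    int left = -1, right = -1; // child indices, -1 if none
    int triCount = 0;          // > 0 marks a leaf with that many triangles
};

// Slab test: does the ray (origin o, precomputed 1/direction) cross the box?
bool hitsBox(const Node& n, const float o[3], const float invDir[3]) {
    float tmin = 0.0f, tmax = 1e30f;
    for (int a = 0; a < 3; ++a) {
        float t0 = (n.bmin[a] - o[a]) * invDir[a];
        float t1 = (n.bmax[a] - o[a]) * invDir[a];
        if (t0 > t1) std::swap(t0, t1);
        tmin = t0 > tmin ? t0 : tmin;
        tmax = t1 < tmax ? t1 : tmax;
    }
    return tmin <= tmax;
}

// Iterative traversal: every step is a dependent, hard-to-predict memory
// load -- the "lookup in a big scene" -- rather than streaming triangle math.
int countCandidateTriangles(const std::vector<Node>& bvh,
                            const float o[3], const float invDir[3]) {
    int count = 0, stack[64], sp = 0;
    stack[sp++] = 0;                              // start at the root
    while (sp > 0) {
        const Node& n = bvh[stack[--sp]];
        if (!hitsBox(n, o, invDir)) continue;
        if (n.triCount > 0) { count += n.triCount; continue; }  // leaf
        if (n.left  >= 0) stack[sp++] = n.left;
        if (n.right >= 0) stack[sp++] = n.right;
    }
    return count;
}
```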
From time to time, big players test the waters with raytracing, and often they are not followed by the crowd of developers, who are afraid to change their ways.
I wish that one day we cross that bridge.
Yeah, no. It still looks like "draw a blurry black halo around edges and call it a day" ambient occlusion.
DOOM (2016) is a good example of this: http://www.adriancourreges.com/blog/2016/09/09/doom-2016-gra...
SSAO has been overused/applied poorly (just like the brown 'bloom'/color grading effect in mid-2000s games), which is why people often dislike it.
Effectively, whenever some new piece of hardware hit the market, the quality of games would be rolled back by a number of years as companies released what might as well have been glorified tech demos.
And as of late I wonder if the pace of GPU development has led to perhaps the most prolonged period of such tech demos.
Honestly I'm not sure why gamedevs seem to care so much about this kind of thing (marketers on the other hand...). Minecraft was worth $2B and intentionally looked like it was rendered in mode 13h.
I dunno, I think gaming is starting to come out of that prolonged oooh-look-shiny-effects kind of era and into a pretty creative space.
Screen space reflections are an amazing trick and work really well for _some_ things, like reflective floors, water bodies, or objects the player might be holding near the camera (e.g. a weapon)
Since they can't reflect the back sides of objects, or anything outside the camera frustum (which includes the player model) they can't be used for things like mirrors.
The appearance of the reflections being 'clipped' as objects leave the frustum when you move your head is also quite jarring in VR.
Maybe this misses the point of standardizing an API? Isn't there considerable value in having a simple framework that can achieve all those effects, and be shared and communicated easily with others? I'm sure a lot of studios would welcome faster dev times, fewer authoring tools, and fewer geometric limits for their rendering effects.
Also worth considering is that all those specific tricks are very limited, either in applicability or in quality or both. Screen space reflections only work on flat surfaces. PCSS still needs a hand-tuned shadow map. DOF has problems at image space contact points. Etc. Let's not oversell today's workarounds as so good they should be preserved forever.
> even if you have a better way of doing things, if it takes more CPU/GPU cycles than a hack that achieves basically the same quality, it will not be adopted. It is that simple.
That's true for any one specific game on any one specific platform. OTOH, that framing misses the trend: there has been consistent and very strong pressure towards higher fidelity, more realism, better physics, faster processors, higher resolution, and overall larger numbers of everything, ever since the first video game ever made. Ray tracing might not be adopted today, but one thing I will guarantee is that next year the CPU & GPU cycles used for rendering will exceed today, and it'll be even more the year after that. When I think about that, I feel like seeing ray tracing take over in games is inevitable.
historically, gfx apis went the opposite direction. fixed function pipeline was replaced by shaders. now we rely on the api basically to do rasterization and little else. i think in ray tracing you would want a similar distillation. so like yes, the ray cast operator might be wanted in the api, but i think computations like importance sampling from the conditional brdf are better left in shaders.
and beginners who just want to get triangles on the screen copy/paste a bunch of code they don't fully understand, like now :)
By "simple framework" I mean the ability to trace rays, rather than the API specifics. What I had in mind is that by being able to trace rays at all, you can very easily get all the effects on the GGP's feature list, with less effort, wider generality, and higher quality than having to code up the buffering tricks required for things like screen space reflections or percentage closer shadows.
Sadly, I doubt the copy-pasta problem is going away any time soon... I think the trend is that it's getting worse.
I'm pleased to see that the tech video includes spherical mirrors. As I understand it, all raytracing demos are obliged by universal law to include at least three reflective balls in any promotion of the technology.
One day, someone will figure out a game where these super-shiny ball bearings are a critical part of the gameplay, and at that point, raytracing will finally take off...
Raytracing may bring more roundness, but why not quadratic surfaces or some other fast-enough-to-render method of real roundness in rasterizers? It may be expensive (although one surface would replace many polygons...), but it'd solve one of the last remaining uglinesses :)
Sadly tessellation largely just gets used to screw over AMD performance (for those sweet Nvidia kickbacks; if you're wondering why some games like Crysis 2 tessellate flat surfaces into an insane number of polygons, this is why).
That's simply not true. I started getting into (high end) 3D in the early 90's. I've probably used every NURB or higher order surface modeling tool under the sun since then.
In fact, there are many amazing modeling tools for working with NURBs or other higher order surfaces.
They just all don't even come remotely close to polygon modeling when precision is secondary and workflow/ease of use is paramount.
So the answer in the VFX world, for years, has been subdivision surfaces. They have almost all of the good of bi-cubic patch modeling and almost all of the good of polygon modeling. What's more, existing polygon modeling tools can easily be upgraded to subdivision surface modelers by merely enforcing 2-manifold topology and adding the ability to display an [approximation to] the limit surface in real time.
Some of the schemes have nice properties. For example, after one step of Catmull-Clark, the entire surface consists of quads. And when treating each local grid of quads as the control polyhedron of a cubic b-spline patch (not a NURB but an UNRB, a uniform, non rational b-spline patch), the surface of the patch is equal to the limit surface that would be obtained by the subdivision scheme.
That's not true if any of the vertices is extraordinary. In other words a vertex shared by more or less than 4 quads. The good news is that subdividing those quads each into 4 smaller ones will result in 3/4 of the surface area meeting the right criteria. Or the area hard to deal with becomes 1/4 the size.
The ultimate coolness of subdivision surfaces is that every vertex on a subdivided mesh is a linear combination of the original vertices. The weights do not change during animation; they only change if the topology does. Another nice thing is that meshes at different LODs contain the vertices of the lower-LOD mesh.
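For a flavor of those fixed weights, here is a tiny sketch of part of one Catmull-Clark step on a closed quad mesh (the face point and the standard vertex rule). This is illustrative only; a real subdivision library also handles edge points, boundaries and creases.

```cpp
#include <array>
#include <vector>

struct Vec3 { float x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }

using Quad = std::array<int, 4>;   // indices into the vertex array

// Face point: average of the face's corners (fixed weights of 1/4 each).
Vec3 facePoint(const std::vector<Vec3>& verts, const Quad& q) {
    return 0.25f * (verts[q[0]] + verts[q[1]] + verts[q[2]] + verts[q[3]]);
}

// Catmull-Clark vertex rule for a vertex V of valence n:
//   V' = (Q + 2R + (n - 3) V) / n
// where Q averages the adjacent face points and R the adjacent edge
// midpoints. Every output is a linear combination of the original vertices,
// and the weights depend only on topology, never on the animation pose.
Vec3 vertexPoint(Vec3 V, Vec3 Q, Vec3 R, int n) {
    return (1.0f / float(n)) * (Q + 2.0f * R + float(n - 3) * V);
}
```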
They really do everything with one exception - they can't exactly represent quadric surfaces which are so important in CAD tools.
I'd guess that this is because the fastest way to compute a precise silhouette of an arbitrary shape (like quadratic surfaces) is to do a ray test for every pixel that might hit the surface. You could project the surface onto the image plane to get boundary curves, but (a) determining the curves that make up that silhouette is a complicated geometric process that's prone to numerical error in weird edge cases (of which there are lots), and (b) once you have the projected curves, you still have to actually shade the interior, which requires you to project back onto the original surface to figure out the normal, texture, etc. At that point, you're essentially doing little ray tests for each pixel anyway, so why not just raytrace the whole thing? The global illumination approximation is much much better with raytracing, and modern GPUs can do it real-time.
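As a tiny illustration of those "little ray tests", here is a per-pixel ray-sphere intersection, a sketch in plain C++ rather than a shader: the hit distance gives the exact silhouette, and the surface normal falls out analytically.

```cpp
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Nearest positive hit distance of a ray against a sphere, if any.
// Solving the quadratic gives the exact per-pixel silhouette, and the
// normal at the hit point is simply (hitPoint - center) / radius.
std::optional<float> hitSphere(Vec3 origin, Vec3 dir, Vec3 center, float radius) {
    Vec3 oc = sub(origin, center);
    float a = dot(dir, dir);
    float b = 2.0f * dot(oc, dir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f) return std::nullopt;           // ray misses the surface
    float t = (-b - std::sqrt(disc)) / (2.0f * a);  // nearer root
    return t > 0.0f ? std::optional<float>(t) : std::nullopt;
}
```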
By the way, modelling everything as an analytic surface rather than as a discretized polygon mesh is called "Boundary Representation" or b-rep, and it's what's used in solid CAD programs that engineers use, like Solidworks or Inventor. Engineers and designers like analytic surfaces because they have "infinite" resolution, but even those CAD programs almost invariably polygonalize b-rep to a mesh and then render the mesh, because it's expensive to hit-test all the surfaces involved; computers only recently became able to do that fast enough for real time interaction, and most CAD programs are old old old.
I thought that was iso-geometric analysis? What is the difference?
Something like tessellation can be used to dynamically tune how many subdivisions to give a model based on certain parameters, such as camera distance.
I think generally with many of the tricks that have come out of SIGGRAPH for the last decade, we've really reached a point where, computationally, raytracing really is starting to have the advantage over rasterization in terms of both pixel count and object count. The struggle for the next 30 years however is going to be overcoming the tooling built around rasterization, and to a lesser extent the headstart rasterization has had in terms of hardware.
Not to sound too crazy but I also have a strong suspicion that if you tied a physics engine to a raytracer, you might get your physics 'for free' which could be a big deal for pushing forward more realistic physics too.
Why is that?
but with this you could have all the particle effects work properly in the reflection.
Actually, mirrors can be easily implemented as a pre-render step: you just re-render the scene from the right perspective with a virtual camera, and then paste that onto the polygons that are acting as the mirror.
Example in Three.JS: https://threejs.org/examples/?q=mirror#webgl_mirror
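Planar mirrors like this are typically built around one reflection transform. Here is a sketch of the math in plain C++, API-agnostic; the function name and the pass description are illustrative, not a specific engine's API.

```cpp
// Mirror plane given as n.p + d = 0 with unit normal (nx, ny, nz).
struct Mat4 { float m[4][4]; };

// Householder-style reflection across the plane: p' = p - 2(n.p + d) n.
Mat4 mirrorMatrix(float nx, float ny, float nz, float d) {
    return {{
        {1 - 2*nx*nx,    -2*nx*ny,    -2*nx*nz, -2*nx*d},
        {   -2*ny*nx, 1 - 2*ny*ny,    -2*ny*nz, -2*ny*d},
        {   -2*nz*nx,    -2*nz*ny, 1 - 2*nz*nz, -2*nz*d},
        {          0,           0,           0,       1}
    }};
}

// Pre-render pass (conceptually): view = cameraView * mirrorMatrix,
// flip the triangle winding order, draw the scene into a texture, then
// sample that texture on the mirror polygons in the main pass.
```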
In the demo, there are actually lots of much more subtle reflective surfaces that contribute to the realism of the scene.
> .. critical part of the gameplay ..
Strictly speaking, graphics hasn't been a critical part of gameplay for a long time. I think we've mostly gone beyond the point where increasing graphics capabilities actually enable new gameplay.
Which is definitely not how raytracing is done if you want to get an image before the end of the world occurs. Rays are traced from the camera, and then back to the light sources. Some amount of pre-lighting can be done by tracing photons forwards from lights, but not the main image generation.
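A minimal sketch of that camera-first loop, with the scene and camera helpers stubbed out as hypothetical placeholders (a real tracer would work with vectors and spectra, not single floats):

```cpp
#include <random>

struct Ray { float ox, oy, oz, dx, dy, dz; };
// What one bounce returns; a real renderer carries full colors and PDFs.
struct Hit { bool isLight; float emitted; float brdfWeight; Ray nextBounce; };

// Hypothetical scene query: in a real tracer this walks a BVH.
Hit traceScene(const Ray&) { return {true, 1.0f, 0.5f, {}}; }   // stub
Ray cameraRay(int, int, std::mt19937&) { return {}; }           // stub

float radiance(int x, int y, std::mt19937& rng, int maxBounces = 4) {
    Ray r = cameraRay(x, y, rng);
    float throughput = 1.0f;
    for (int bounce = 0; bounce < maxBounces; ++bounce) {
        Hit h = traceScene(r);
        if (h.isLight) return throughput * h.emitted;  // eye path reached a light
        throughput *= h.brdfWeight;  // attenuate by the surface response
        r = h.nextBounce;            // keep heading back toward the lights
    }
    return 0.0f;  // capping bounces darkens the result but still feels like GI
}
```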
The demo is amazing on its own as a piece of art, let alone that it's done programmatically, let alone that it's rendered in real time, let alone that it's just 4k.
I've used Octane and I think what they are best at is hype.
And in their facebook group:
But I think we are still some time away from real-time noiseless path tracing, although clever denoising filters for the first few samples are getting much better.
And of course you can decide to use a very low number of bounces. This will give a darker, inaccurate result, but it can still give a feel of global illumination.
> Depending on the exact algorithm used, rays of light are projected either from each light source, or from each raster pixel; they bounce around the objects in the scnee[sic] until they strike (depending on direction) either the camera or a light source
I love open standards as much as the next guy, but for something like this, where the utility is still an open question, I'd rather someone just makes an implementation for their own technology. The alternative is that your open standard is full of half-baked experiments, and trimming a standard is a huge pain.
If DirectX were to be used on Linux and macOS, you'd have some real competition, but that is not the case.
And most new phone chipsets. And a lot of old phone chipsets. And a lot of ARM-based single board computers. And a lot of non-Xbox consoles like the Switch have hardware support for DirectX...
DirectX is in a lot of places you would not expect.
Do you have evidence it doesn't run at 60 fps?
Nouveau is still problematic because of the lack of access to the NVIDIA proprietary code, i.e. the extra bits like fan management, monitors, etc. That, and its performance still just isn't there against the proprietary drivers.
AMD drivers are pretty good and Wine performance has increased significantly over the last year or so.
Hopefully Vulkan encourages more widely supported game dev, it's a terrible thing that gamers are practically locked into the Windows OS by way of proprietary graphics stack and the old catch-22 of 'no market for Linux games/no games on Linux.'
The gamer culture is not the same as the FOSS one: cool games, getting hold of IP and belonging to a specific tribe (e.g. PS owners) are more relevant than freedom of games.
Currently Vulkan only matters on Linux and flagship Android devices.
It remains to be seen if Microsoft will ever allow ICD drivers on the Store or what is the actual use of Vulkan vs NVN on the Switch.
Direct3D has had world-class debugging tools and a reference renderer for years, meanwhile when I'm trying to ship OpenGL games the shader compiler doesn't even work the same across multiple PCs. (Vulkan fixes this stuff, yay!)
It sucks that DX is proprietary but the proprietary nature of it means that it can achieve things that are only possible with full integration - just like Metal on iOS and OS X.
Apple saved OpenGL from being irrelevant thanks to NeXTSTEP and their OpenGL ES push on iOS, because they needed to attract devs; now, with the commoditization of middleware among AAA studios, that is no longer relevant.
And it remains to be seen if Vulkan will ever get any adoption beyond Linux and flagship Android phones.
2) DX12 works on Win10 only, which is deployed to way less than 90% of PCs
Mostly it has seemed like it's been everyone except the actual graphics programmers.
Limited uses? Sure. Otherwise hell no. It's never been more than a gimmick GPU vendors use to show off (in real-time, offline rendering is something else entirely).
Also, I doubt GPU vendors would add special-purpose hardware like that. They'd much rather just add some new shader instructions to help doing it.
The PowerVR guys added special purpose hardware for doing spatial lookups for raytracing.
Shadows are probably one of the places where raytracing makes a lot of sense though. It's one of few shadow rendering techniques that don't blow a ton of precious fillrate, and it shouldn't take too many rays.
Edit: Yeah, NVidia actually just announced a big project on that exact subject: https://www.youtube.com/watch?v=jkhBlmKtEAk
Obviously, it's hard to extrapolate from just a press release, but with Microsoft's big ML kick of the last few years, breakthroughs in ML steps in raytracing might seem like a reason for the press release.
If you don't move, the image "improves" over time. You can change the resolution in the top left corner. Play it in fullscreen. It works on phones, too.
The problem is that at this point people want it at 1080p, 4K or 8K.
Rasterization has improved a lot over the years, so the meaning of "raytracing" has to improve to be competitive.
If you want to see how much existing techniques suffer compared to tracing, just check out the absolutely miserable, incredibly ugly, just disgusting screen space reflections in the brand-new Crytek game Hunt: Showdown. The water is a nightmare.
A raytracing API also isn't forced into the pipeline for every rasterized polygon like old features - hardware T&L, geometry shaders, etc - were. It's something you use on-demand in a compute or fragment shader.
Also, ray tracing, marching, and casting have been real-time for a while, for some definition and degree. Graphics has always been about tradeoffs, though, and this seems markedly more general-purpose than anything I've seen before.
I’m excited, even though I don’t have any Microsoft products outside my Xbox.
Anyway this seems a little overhyped
I haven't touched a PC game in ages, would be fun to come back and see some epic like HL3 done up in this.
"I am 90% sure that the eventual path to integration of ray tracing hardware into consumer devices will be as minor tweaks to the existing GPU microarchitectures." - John Carmack
I stumbled across this a year or so ago and was amazed at how the realistic image is built up after rotating.
Won't this resemble the power demands of cryptocurrency mining rigs, except actually used for gaming? As if the amount of energy we use per capita weren't already bad enough...
Looks interesting anyway.
* I use Valve due to their apparent Linux push with SteamOS (what's the go with that btw!?)
Windows is such an anti-brand that they couldn't even get customers to buy a Windows phone after a $500 million advertising campaign.
Also former IGDA member and attendee of a few GDC conferences, game developers only care about shipping games and their IP.
AAA studios don't care 1 second about APIs to make a better world.
Adding a new rendering backend to a games engine is a trivial task, when compared to the pile of features a game engine needs to support.
Also most Windows developers don't care about Stack Overflow surveys.
You appear to be speaking on behalf of all/Windows developers. Perhaps not 'most' think that way? What is your evidence other than your stated status and related anecdotes?
Why should we care what AAA studios care about? Should we be happy they've continued to push DX as default?
What do you do when DX doesn't provide what is needed for a game? Too bad? Fight with MS?
Overall, shouldn't we want gamers to enjoy the games being developed as widely as possible on a range of platforms? It's a little lazy to just fall back on the old catch-22 of 'no market' because 'no games' because 'no market...'
It is easy to find out how the industry actually thinks, and whether I am just writing nonsense.
Go spend some time reading Gamasutra (all articles available online), Connections, MakingGames Magazine, or the free sessions and slides at GDC Vault.
If there is a university degree in game development nearby, attend their monthly meetings.
Then you can validate for yourself what the games development culture is like and what is actually relevant.
You keep saying this, but it remains false. If it were so trivial, studios wouldn't have a hard time adding such backends and wouldn't need to hire third-party porting experts when they decide to do it. You can see how long it takes major engines like Unreal to make a fully functional backend (since features added to such engines are publicly communicated). It's very clear it's not trivial at all.
And MS and Co. obviously do all they can to keep this difficult, that's the main idea of their lock-in which they designed to tax developers with.
What's bad though, is your justification of this practice.
They added Wii, DirectX10, iOS and Android backends while I was there. None of these were ever considered risky and none had more than one person working on it. Each console/platform has its own quirks in how to optimise the scene for rendering, but having something rendering on screen is pretty much trivial once you have the machinery in place.
I can't speak for Epic, they are making an engine for every possible game and every possible rendering scene which is a harder problem than what we were doing. But the rendering backend isn't the hard part.
The problem is not in the risk, but simply in the cost itself. It's an extra tax to pay. However quality can also suffer, see below.
> I can't speak for Epic, they are making an engine for every possible game and every possible rendering scene which is a harder problem than what we were doing. But the rendering backend isn't the hard part.
The story of Everspace illustrates my point. They were bitten by multiple issues in OpenGL backend of UE4, and it took Epic a long time to fix some of them. Their resources are limited, and they are more focused on more widespread backends obviously. Which is exactly the result lock-in proponents are trying to achieve.
Don't know exactly what issues Everspace had with the UE4, but you want to have a fun night go out with some Epic licensees and get them to tell you war stories of issues they have had when they tried to do something which Epic hadn't done in their games. You're paying Epic for the "battle testing" and often they didn't fight those battles.
Part of the reason I left the games industry is that once you work at studio with an internal engine it is extremely frustrating to work on AAA games without the freedom to walk over to the engine programmer and get them to move the engine closer to what you need.
Internal engines are also, on average, less cross-platform, simply because big publishers and shareholders don't want the very expenses that lock-in adds to development. That's why many Linux releases for such games use source or binary wrappers rather than proper native rendering to begin with. This highlights my point above.
A port of a game is more than changing the low-level APIs used to control the hardware. It's the hardware of the platform that decides the complexity of producing the port.
Linux is a special case because it's the same hardware as Windows. Your market is people who want to play the game but aren't dual-booting. Most of the issues with producing your port are going to come down to driver incompatibilities and the fact that every Linux system is set up a little bit differently (the reason Blizzard never released their native Linux WoW client). It's not a big market and there are loads of edge cases.
For big publishers and AAA development, they're not looking to break even or make a small profit. They need to see multiples of return on their money or they aren't going to do it. Using a shim is cheap and doesn't hurt sales enough to matter to them.
And I'm sure that cost plays a role when a small market is evaluated. The higher the cost, the less likely such a publisher is to care, because the prospects of profit are also reduced. So it goes back to my point: lock-in proponents like MS and Co. benefit from lock-in by slowing down the growth of the competition.
I think where we disagree is that I don't think of the lower-level API as being much of a lock-in. The better graphics programmers I know have pretty extensive experience with the various flavors of DirectX and OpenGL. The general principles are the same and good programmers move between them easily.
Lock-in here doesn't mean they have no technical means of implementing other graphics backends, it means that implementation is hard.
A lot of common middleware supports Linux just fine. It's graphics that's usually the biggest hurdle. People have expertise to address it, but it's still a tax to pay. And different distros support is a very minor thing in comparison.
If graphics is not the biggest issue, what is then in your opinion?
Graphics is the biggest issue, but the issue isn't at the API level. It's in the driver and hardware differences below that layer.
The "tax" as you call it, comes mostly from the hardware drivers leaking through the abstraction. Part of this is AAA game developers fault since they are attempting to use all the GPU with edge-case tricks to eke out more performance.
Make yourself an IGDA member, attend GDC, go to Independent Games Festival, network with people there and see how many would actually share your opinion.
So the claim that it's trivial is a fallacy. It's surely doable, but it's a substantial effort.
2 - Check how many in the industry actually care about your 3D API freedom goals. Even Carmack now rather likes DX, in spite of his earlier opinions.
3 - Every big fish and major indie studios are using Unreal, Unity, CryEngine, Xenko, Ogre3D, Cocos2d-X, or whatever else rocks their boat.
If you are happy playing Don Quixote, by all means keep doing it.
Game studios won't change their culture just because of some guys having HN and Reddit 3D API flamewars.
So it's an extra cost for developers who need to spend time on it, and it's exactly the cost MS and other lock-in freaks benefit from, since it increases the difficulty of releasing cross-platform games (one more difficult thing to address). The higher the difficulty, the higher the likelihood of some games remaining exclusives, which is exactly what lock-in freaks want.
And if you claim that this difficulty is offloaded from most game developers to third-party engine developers, it's still a problem. Longer development periods, more bugs and harder support all contribute to some not making cross-platform releases as well.
There are no two ways about it, lock-in is evil, and your justification of it is very fishy (you must be working for one of the lock-in pushers).
Nowadays doing boring enterprise consulting, with focus on UI/UX.
Experience about reality, how people in the industry think, what those people actually consider as project costs.
Gamasutra articles are all available online. Try to find any postmortem complaining about proprietary APIs in the "what went wrong" section.
Experience, not demagogy.
I trust more the experience of those who actually work on game ports and explain the relevant difficulties they encounter. And no one says it's trivial; on the contrary, they say that rendering is the hardest part, and the most costly one to port. So it's very clear that lock-in proponents who are against cross-platform gaming (MS and Co.) benefit from this hurdle and strengthen it by pushing their APIs.
Someone who would actually lose this source of income if you had what you wish for, and thus be forced to search for other kinds of consulting services in the games industry.
Again, not understanding how the industry works; plain demagogy.
When a games developer sees a 3D API manual for the first time, their first thought is "What cool games can I achieve with it?" not "Is it portable?".
He can find what to do, working in engine development directly.
So far you were engaging in demagoguery about how trivial porting is and justification of lock-in, even when facts to the contrary were shown to you directly. I see no point in taking your word on it, against those who are actually known to be working in this field.
> When a games developer sees a 3D API manual for the first time, their first thought is "What cool games can I achieve with it?"
Until their publisher or shareholder knocks on their heads and stops their cross platform releases because of costs of using more APIs. Goal (of lock-in supporters) achieved.
Yeah, not sure how this is relevant. Developers are not your average CoD, LoL, WoW, etc. players.
I'm not trying to defend anything as I don't even use Windows myself, it's just that from a business point of view making sure DX and "gaming" in general is locked to Windows makes absolutely 100% sense as again - most gamers do not care if they have to use Windows or not, they just want to play games.
They're not choosing Windows because it does X or Y better or because it's a consciously preferred choice, they're choosing it because it's the option they get by default and because it keeps being reinforced through support of proprietary systems like DX.
Though only anecdotes, I sure do see a lot of comments around people willing to support/use Linux if not for lack of games/particular apps. As you state, gamers don't care what OS they use as long as it works. And well, Linux is free and works, it's just not well enough supported because of the chicken and egg of no market size vs no products available so no market size. I.e. if AAA games were developed with full Linux support, gamers wouldn't care if you gave them Linux as default instead, would they?
That's why it's foolish to assume Microsoft will change their ways somewhere down the line. The point is that it is us, the users that should be choosing alternatives and not celebrating vendor lock-in.
And needless to say that exactly with Satya Nadella they started Windows S where you can't use any API other than D3D.
They opened .NET because they wanted to get it onto the server side, since Windows has very limited exposure in that market. But even today .NET is still not multi-platform in terms of UI, and for that reason a lot of Windows-specific .NET software wasn't ported. And it's already been more than 3 years since the release of .NET Core.