I'm sad this wasn't investigated further. Was one of the implementations not standards compliant?
In practice, some alternative implementations of these instructions treat the underspecification as license to do just enough to clear the threshold of being technically compliant with the standard and no more. In some cases this shows up as material differences in the least significant bits of the output.
For floating point intensive algorithms, these small discrepancies and edge cases will occasionally bubble to the surface as material differences in high-level code behavior. My introduction to this class of bug many years ago was an application that was counting things for tax purposes based on geospatial relationships. On AMD, it produced a count that was off by one.
So far it looks like a CPU bug which produces NaN on some input.
The problem is most implementations include a few extra instructions or modes that are not defined in the IEEE spec. The Fast Reciprocal Estimate instruction that caused the issue in this example is one such instruction.
It's not defined in the IEEE spec. It's only defined in the x86 spec (and corresponding similar instructions are defined for other architectures). As its name implies, it's not defined to give an exact result. It's designed to be fast and it's designed to be a rough estimate to some level of precision.
You should not use such instructions if you care about accuracy or cross-platform reproducibility. Stick to the instructions that are actually defined in the IEEE 754 spec, and stick to one of the recommended rounding modes, making sure all platforms use the same one.
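For example, a minimal sketch of pinning the rounding mode on x86 (std::fesetround is the standard C++ facility, _MM_SET_ROUNDING_MODE sets the SSE MXCSR register; round-to-nearest-even is the IEEE default):

    #include <cfenv>
    #include <xmmintrin.h>

    // Pin rounding to round-to-nearest-even so all platforms round identically.
    void pinRounding()
    {
        std::fesetround(FE_TONEAREST);              // C/C++ floating point environment
        _MM_SET_ROUNDING_MODE(_MM_ROUND_NEAREST);   // SSE MXCSR, used by scalar and vector SSE math
    }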
This seems like an overly strong statement, as one reading of it is that "Fast Reciprocal Estimate" is just a bad rng function. I guess you mean that it's up to the user to provide their own error estimation?
Over the complete range of floats, AMD is more precise on average, 0.000078 versus 0.000095 relative error. However, Intel has 0.000300 maximum relative error, AMD 0.000315.
Both are well within the spec. The documentation says “maximum relative error for this approximation is less than 1.5*2^-12”, in human language that would be 3.6621E-4.
Source code that compares them by creating 16GB binary files with the complete range of floats: https://gist.github.com/Const-me/a6d36f70a3a77de00c61cf4f6c1...
I’m not sure about that. I think it might be. Even though both are within spec, the results are numerically different.
Modern DirectXMath doesn’t use these approximated instructions for inverting matrices; it’s open source, here: https://github.com/microsoft/DirectXMath/blob/83634c742a85d1...
True, but it's quite common that the throwaway bits of floating point ops (i.e. those beyond the epsilon) will be numerically different between vendors.
> Modern DirectXMath doesn’t use these approximated instructions for inverting matrices; it’s open source, here: https://github.com/microsoft/DirectXMath/blob/83634c742a85d1....
Good find, neither does Wine, actually: https://doxygen.reactos.org/dc/dd8/d3dx9math_8h.html#a3870c6...
Would be interesting to see if this bug also happens on Linux.
The divps instruction (32-bit float divide, the precise version) was relatively slow back then: on AMD K8 it had 18-30 cycles for both latency and throughput, 21 cycles on AMD Jaguar, and Core 2 Duo did it in 6-18 cycles.
Fortunately, they fixed the CPUs. Skylake has 11 cycles latency of that instruction, Ryzen 10 cycles.
It’s similar for square root. Modern CPUs compute non-approximated 32-bit sqrt(x) in 9-12 cycles (Ryzen being the fastest at 9-10 but Skylake ain’t bad either at 12 cycles), old CPUs were spending like 20-40 cycles on that.
I think these implementation-specific shenanigans only applied to legacy x87 instructions which used 80-bit registers. SSE and AVX instructions operate on 32 or 64-bit floats. The math there, including rounding behavior, is well specified in these IEEE standards.
I’ve made a small test app: https://github.com/Const-me/SimdPrecisionTest/blob/master/ma...
It abuses AES instructions to generate a long pseudo-random sequence of bits, then re-interprets these bits as floats, does some FP math on these numbers, and saves the output.
I’ve tested addition, multiplication, FMA, and float to integer conversions. All 4 output files, 1 GB / each, are bitwise equal between AMD desktop and Intel laptop.
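Roughly the shape of that trick (my own sketch, not the repository's actual code): each AES round scrambles a 128-bit state into four fresh pseudo-random lanes, which are then reinterpreted as floats.

    #include <immintrin.h>   // AES-NI + SSE intrinsics

    // One step of an AES-based pseudo-random bit generator; the key is arbitrary.
    __m128 nextRandomFloats(__m128i& state, __m128i key)
    {
        state = _mm_aesenc_si128(state, key);   // one AES round thoroughly mixes the 128 bits
        return _mm_castsi128_ps(state);         // reinterpret the 4 lanes as floats, NaNs and all
    }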
Nice work! I'd be extremely curious to see if this still holds on intrinsics like `_mm512_exp2a23_round_ps` or `_mm512_rsqrt14_ps` (I'd wager it probably won't).
This is a common sentiment, and it is perhaps a helpful way to look at things if you don't want to dig into the details (even if it's not quite right).
But it's worth understanding the magnitude of errors. rcpps will be 3-4 orders of magnitude "more wrong" compared to the typical operation (if you view the "epsilon" of most operations to be the error after rounding). Or said another way: it would take the cumulative error from many thousands of adds and multiplies to produce the same error as one rcpps operation.
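For a feel of the magnitude, a tiny sketch (mine, not the parent's code) comparing the rcpss estimate against a correctly rounded divide:

    #include <xmmintrin.h>
    #include <cmath>
    #include <cstdio>

    int main()
    {
        const float inputs[] = { 0.1f, 1.0f, 3.0f, 12345.0f };
        for (float x : inputs)
        {
            float approx = _mm_cvtss_f32(_mm_rcp_ss(_mm_set_ss(x)));   // rcpss: ~12-bit estimate
            float exact = 1.0f / x;                                    // divss: correctly rounded
            std::printf("x=%g rcp=%.9g exact=%.9g rel.err=%.3g\n",
                        x, approx, exact, std::fabs(approx - exact) / exact);
        }
    }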
On your graph, what’s red and what’s blue and what is on x axis?
X axis is source value, I've extracted a small subset of the data to graph in Excel, there: https://gist.github.com/Const-me/a6d36f70a3a77de00c61cf4f6c1...
Also, I've just recalled Y is absolute error, not relative; that's why it follows the shape of 1/x.
Thanks. If you still have the data (or if you can regenerate it) it would actually be possible to make a few small graphs that would cover the whole set:
Basically, the idea is to have as many x points as there are different exponents, which for 32-bit floats is at most 256. Then on the y axis you plot, for each exponent interval, the maximum distance (in mantissa bits) between the truly correct mantissa and the calculated mantissa over that whole interval. Such graphs would allow comparing the Intel and AMD implementations separately, and they are what I'd be very interested to see. So the idea is to find the maximum within each interval, and there is only a limited number of intervals. Only if such graphs match between AMD and Intel would it be interesting to compare inside the intervals; differences at this level are the ones I'd expect to be the obvious troublemakers, where results like in the article (full black instead of a shadow) wouldn't be surprising.
So you don't have to juggle huge files for that; it's only 256 values per CPU that need to be compared in that pass.
The results are not guaranteed to have same exponent, e.g. 1 / 0.499999 can be either 1.999999 or 2.0000001, both are correct within the precision, but have different exponents.
1) Use the binary representation of the numbers! To do so, reinterpret the bits of the resulting float as an unsigned integer, then use bit masks and shifts to extract the exponent and mantissa. Note that the leading 1 is implicit rather than explicit in the IEEE format unless it's a denormal number (so make it explicit during extraction).
2) Use the exponent of the correct result as the "interval" reference.
If the exponents are the same, subtract the smaller mantissa from the bigger one; that's the absolute "distance" between the two numbers. The goal is to find the biggest absolute distance in each interval.
3) If one of the two compared values has a different exponent, they can be brought to the same scale by a bit shift: shift the mantissa of the one with the bigger exponent left accordingly. Again do the subtraction and use the result as the absolute distance. The goal is to figure out the biggest absolute distance in each interval (maintaining a maximum per interval).
In short, think binary, not decimal, and measure using those values. The binary values are the only ones that matter; the decimal representation doesn't necessarily show the exact bit values. (A rough code sketch of the measurement follows after the worked example below.)
float 1.0 == unsigned 0x3f800000: here the exponent is 127 == 2^0 and the mantissa is 0, with the implicit 1 at the start, i.e. explicit: 0x800000
float 0.999999940395355224609375 == unsigned 0x3f7fffff: here the exponent is 126 == 2^-1 and the mantissa, made explicit, is 0xffffff
The absolute distance between these two numbers is 1: adding one to the lowest bit of the mantissa of the smaller number would yield the higher number, 0xffffff + 1 = 0x1000000, the latter being the mantissa of the bigger number adjusted to the exponent of the smaller one (0x800000 << 1). If the "correct" number was 0x3f800000, then even though a shift was needed to calculate the absolute distance, the interval is still 0 (i.e. 0 is the x-axis value, as its exponent was 2^0, i.e. 127, and the value plotted on y is 1 until a bigger distance occurs).
For more examples of the format you can play here:
Also note that a few exponents are special, meaning infinity or NaN. Whenever the "correct" answer is not a NaN or infinity but the "incorrect" is, that should be treated specially, if it actually happens.
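A minimal C++ sketch of that measurement as I understand it (my own illustration: it assumes the "correct" value is an exact 1.0f/x divide, the "approx" value is the rcpss estimate, and it only walks the floats in [1, 2) to keep it short):

    #include <xmmintrin.h>
    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Split a float into its raw exponent and a mantissa with the implicit leading 1 made explicit.
    static void decompose(float f, int& exponent, uint32_t& mantissa)
    {
        uint32_t bits;
        std::memcpy(&bits, &f, sizeof(bits));        // reinterpret the float's bit pattern
        exponent = int((bits >> 23) & 0xFF);
        mantissa = bits & 0x7FFFFFu;
        if (exponent != 0)                           // normal number: the leading 1 is implicit
            mantissa |= 0x800000u;
    }

    // Absolute mantissa distance between the two values, aligned to the smaller exponent.
    static uint32_t mantissaDistance(float correct, float approx)
    {
        int eC, eA; uint32_t mC, mA;
        decompose(correct, eC, mC);
        decompose(approx, eA, mA);
        // For a reciprocal estimate the exponents differ by at most 1, so this shift can't overflow.
        if (eC > eA) mC <<= (eC - eA);
        else if (eA > eC) mA <<= (eA - eC);
        return mC > mA ? mC - mA : mA - mC;
    }

    int main()
    {
        uint32_t maxDist[256] = {};                  // one bucket per exponent of the correct result
        for (float x = 1.0f; x < 2.0f; x = std::nextafterf(x, 2.0f))       // every float in [1, 2)
        {
            float correct = 1.0f / x;
            float approx = _mm_cvtss_f32(_mm_rcp_ss(_mm_set_ss(x)));       // the estimate under test
            int e; uint32_t m;
            decompose(correct, e, m);                // m unused; we only need the exponent bucket
            maxDist[e] = std::max(maxDist[e], mantissaDistance(correct, approx));
        }
        for (int e = 0; e < 256; e++)
            if (maxDist[e]) std::printf("exponent 2^%d: max mantissa distance %u\n", e - 127, maxDist[e]);
    }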
The Total, exact, less, and greater columns have the count of floats in a bucket. The sum of the “Total” column gives 2^32, the total count of unique floats.
Computing the highest differing bit is too slow for this use case; neither SSE nor AVX has a vector version of the BSR instruction. Instead, I’m re-interpreting the floats as integers and computing the difference of those integers. The maxLess, maxGreater, and maxAbs columns have that maximum error, measured as a count of float values. The value 4989 means the mantissa had roughly its 12-13 lowest bits incorrect.
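In other words, the error is counted as a distance in representable floats. A scalar sketch of the idea (the repository's actual code is vectorized with AVX2):

    #include <cstdint>
    #include <cstring>

    // Distance between two same-sign, finite floats, counted in representable float values.
    uint32_t floatDistance(float a, float b)
    {
        uint32_t ia, ib;
        std::memcpy(&ia, &a, sizeof(ia));   // reinterpret the bits, don't convert the value
        std::memcpy(&ib, &b, sizeof(ib));
        return ia > ib ? ia - ib : ib - ia;
    }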
Source code is there: https://github.com/Const-me/SimdPrecisionTest/blob/master/rc...
Not particularly readable because I’ve used AVX2 and OpenMP; however, this way it takes less than a second on a desktop, and maybe 1.5 seconds on a laptop, to process all of these floats.
In this case (games), they probably made the right tradeoff despite the bug. But in general? I don't want to rehash the argument, I'm just really, really glad I'm not the one flipping that switch (or subject to the hordes of HNers who think I'm a monster for not flipping it).
The IEEE 754 floating point representation gives you an easy way to roughly approximate the log2 of a number. The exponent gives you the integer part of the log2 exactly, and the mantissa gives you a fractional linear term you can drop onto the integer part to get closer to the actual log2 value. This lets you do some fun things fairly easily.
x -> log2(x) -> -log2(x) -> 1/x
x -> log2(x) -> -0.5 log2(x) -> 1/sqrt(x)
x -> log2(x) -> 0.5 log2(x) -> sqrt(x)
The lossiness of the conversion limits your precision, but there are a lot of times you don't give a shit. So the instructions are still valuable even if they're only approximately correct. For the reciprocal case, it will do something like this:
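A sketch of the trick being described (my reconstruction, not the original poster's exact code; it boils down to subtracting the input's bits from the constant 0x7F000000 mentioned below):

    #include <cstdint>
    #include <cstring>

    // Approximate 1/x by negating log2(x) in the integer domain.
    float approxReciprocal(float x)
    {
        uint32_t i;
        std::memcpy(&i, &x, sizeof(i));   // bits(x) ~ 2^23 * (log2(x) + 127)
        i = 0x7F000000u - i;              // bits(1/x) ~ 2 * 127 * 2^23 - bits(x)
        float y;
        std::memcpy(&y, &i, sizeof(y));
        return y;                         // optionally refine with a Newton step: y * (2 - x*y)
    }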
I'm not sure where the nondeterminism comes in between AMD and Intel. As you can see, with the reciprocal, there's no magic constant like there is with inverse square root. Maybe they're fudging something, or have a different form of Newton's method, I dunno.
An initial thought I had was that their table lookup is there to find a magic constant that might nudge the final value in the right direction (instead of the magic constant 0x7F000000 that my code boils down to). But that doesn't seem to be what my Kaby Lake is doing.
This is gonna bug me all night, I know it.
Technically they are. Practically, many CPUs, especially older ones, have a non-trivial latency cost for passing a value between the FP and integer ALUs, like a couple of cycles in each direction.
Ever wondered why there are 3 sets of bitwise instructions, e.g. pandn, andnps, and andnpd, which do exactly the same thing with the bits in these registers? Now you know.
One possibility is that there's an accidental equality test on the path of a runtime (AMD) value vs. a value that is computed at compile time (presumably on an Intel CPU).
I’m guessing that the 3DNow! path was still being chosen based on a check like processor brand == AMD and processor generation >= x, assuming that 3DNow! would never be removed. This kind of thing has already been discovered in other games.
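For illustration only (the game's actual check isn't known): the fragile approach infers features from the vendor string, while the robust one tests the 3DNow! bit that CPUID actually reports.

    #include <cpuid.h>    // GCC/Clang; MSVC has __cpuid in <intrin.h>
    #include <cstring>

    // Robust: ask extended leaf 0x80000001 and test the 3DNow! feature bit (EDX bit 31).
    bool has3DNow()
    {
        unsigned eax, ebx, ecx, edx;
        if (!__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
            return false;
        return (edx >> 31) & 1;
    }

    // Fragile: the vendor string says nothing about which features a future CPU will keep.
    bool isAmd()
    {
        unsigned eax, ebx, ecx, edx;
        char vendor[13] = {};
        __get_cpuid(0, &eax, &ebx, &ecx, &edx);
        std::memcpy(vendor + 0, &ebx, 4);
        std::memcpy(vendor + 4, &edx, 4);
        std::memcpy(vendor + 8, &ecx, 4);
        return std::strcmp(vendor, "AuthenticAMD") == 0;
    }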
I'll talk about it in person with people I know at work or I'd mention it on my old blog - but I don't want to risk it to the "unwashed masses".
But maybe I'm being too cautious - it is pretty in-depth here. That alone might dissuade most of the shitposters. And the karma requirements for downvoting seem sufficient.
More people mean a broader dilution of the level of discourse and knowledge, and greater and greater incentives to market / spam / propagandize.
ad free in the sense that there are no obviously in-your-face efforts at graphical marketing; HN is run by a startup incubator and has a startup focus; flogging your latest release is the raison d'etre of the site.
It’s similar to recommending slate star codex to someone. Unless that person is comfortable with rationally discussing uncomfortable topics, they will think you a nut job who supports everything on the site.
Though that is obviously a guess (that this is what happens; that time deltas are evil is fact :-P). I've done something similar in the past for getting light on dynamic entities from a static environment, so some things did click.
pos += speed*delta
The solution is to change this so the update runs at a fixed interval (e.g. 60Hz), with each tick simply doing
pos += speed
and then, when rendering, interpolating between the previous and current state:

visible_pos = prev_pos*(1.0-inbetween) + pos*inbetween
inbetween = (now - last_time)/(1000.0/interval)
The drawback with this is that in the worst case (inbetween=0) the visible output is one frame behind, though without any form of framecapping (including no vsync) this should rarely be the case, and with 60Hz intervals it shouldn't be visible even on 120Hz+ monitors. But if you want to avoid this you can have some systems use the current position instead of interpolating, e.g. have the camera's direction in first person games bypass the interpolation so that when the user moves the mouse they get the most immediate feedback.
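Putting that together, a minimal sketch of the loop (Entity, now_ms and render are placeholder names of my own, not from the comment above):

    struct Entity { float prev_pos, pos, speed; };

    const double step_ms = 1000.0 / 60.0;              // fixed 60Hz simulation step, in milliseconds

    void game_loop(Entity& e, double (*now_ms)(), void (*render)(float))
    {
        double last_time = now_ms();
        for (;;)
        {
            // Run as many fixed-size simulation steps as have elapsed since the last one.
            while (now_ms() - last_time >= step_ms)
            {
                e.prev_pos = e.pos;                     // keep the previous state for interpolation
                e.pos += e.speed;                       // fixed step: no delta multiplication
                last_time += step_ms;
            }
            // Render a position interpolated between the last two simulation states.
            float inbetween = float((now_ms() - last_time) / step_ms);
            render(e.prev_pos * (1.0f - inbetween) + e.pos * inbetween);
        }
    }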
Personally I've used this approach in my last engine and it gave buttery smooth, instant response without any perceivable lag or issues even on a 120Hz CRT monitor (pretty much all modern flat panel PC monitors have response time issues that can hide frame latency issues, but nothing stays hidden on a fast CRT :-P).
But the tiny deltas are only one issue. Another is that you get variable deltas that can go from big to small (even capped) from one frame to the next, and this can still introduce numerical instability and harder-to-spot heisenbugs.
It might sound like a PITA to keep the previous state around, but you really only need it for things that can't be calculated on the fly (e.g. a position that can change arbitrarily); others can be calculated (e.g. particles), and in the long term you are making a more robust system and saving yourself from chasing weird bugs. By having everything run at fixed updates you know that once you see something working, it'll keep working regardless of the framerate.
The correct solution is to run the simulation on a different thread from the rendering, so that the simulation can be run at an appropriate frequency and the rendering can proceed at whatever framerate the user's hardware is capable of. The more commonly used "solution" is for the game to run the simulation and rendering on the same thread, and cap the framerate as a way to indirectly cap the simulation update frequency. Occasionally you find a game that merely assumes that the framerate won't go over 60Hz, and if your monitor is faster, the game itself runs in fast-forward.
I think very few games run the physics in a different thread, that sounds like asking for trouble.
I don't believe anyone was asserting that how it's usually done in practice is the right way to do it. Developers obviously have a strong preference for easy over correct and flexible.
What you describe is a reasonable compromise for developers who are afraid of multithreading even with a straightforward producer-consumer data flow. It comes with its own complications, like having to buffer input events along with their timestamps to apply them during the right catch-up iteration of the simulation.
1) How do you deal with hardware that can't run the simulation at the appropriate frequency?
2) How do you keep simulation and animation smooth and linear over time when facing processing oscillations, if not by using time deltas?
3) Is there a graphics/processing-demanding game that doesn't use time deltas?
You keep animation smooth by decoupling it from physics; the output of your physics engine will generally include at least linear and angular velocities you can use for interpolation. This kind of thing is necessary anyway if you’re running your physics simulation on a server and have to communicate to the renderer over the network.
> The correct solution is to run the simulation on a different thread from the rendering, so that the simulation can be run at an appropriate frequency and the rendering can proceed at whatever framerate the user's hardware is capable of.
The "running in another thread" is a way to enable running the simulation at a fixed frequency where it will produce "correct" results, while not imposing the same constraints on the rest of the game engine that are not as sensitive to timestep issues. I never said that moving the simulation to a thread of its own is the entire solution itself, and "increasing performance" isn't the goal or result.
In fact, a myriad of games (and other kinds of simulations) have been doing exactly that for a very long time.
To start it off, Kingdom Come: Deliverance was a big deal for me (though not sure if I especially liked the story or just that it didn't get in the way), and Planescape: Torment is a favorite too (still remember the wonder with which I explored it the first time).
This is not correct nowadays (it was surely so 15+ years ago, very roughly before the introduction of walking simulators and/or the expansion of the indie market).
Regardless, games that are strongly centered on narrative (writing) don't require a huge investment, so I think you'll mostly (or even exclusively) find them in the indie area; action/RPG games also are not the best in this department.
I suggest to start with "What remains of Edith Finch" - you won't be disappointed, assuming you don't necessarily look for action/RPGs.
edit: Just remembered that Planescape Torment was created by Black Isle, the studio that also worked on the first two Fallout games. Black Isle later dissolved, but the key players later formed Obsidian, the studio that developed New Vegas on Bethesda's behalf. I imagine it's very likely you've already played FNV (in which case, I'd love to hear how your experience aligns with mine), but if not, it's probably up your alley.
Decent writing, for a game, but the isometric view isn't great and the gameplay is clunky. Plot and theme-wise it holds together better than Fallout 3 -- which has plot holes so big you can drive a bus through them -- but FO3 was a lot more fun to play, at least the first 2-3 playthroughs, and has so many freakin mods it's a different game.
Give Disco Elysium a go. The writing's good, but how it uses gameplay to explore that writing is what makes it great. Similar amnesiac setup to Planescape, with interesting dialogue mechanics.
My favorites in terms of writing:
- Spider and Web
- Superluminal Vagrant Twin
- The Dreamhold
- A Mind Forever Voyaging
Over the last several years of very occasionally playing interactive fiction, I've been particularly impressed by:
- Cactus Blue Motel, by Astrid Dalmady. 2016. A coming-of-age story with a bit of magical realism, written in Twine. Highly accessible, and it takes just minutes to give the game a try: http://astriddalmady.com/cactusblue.html
- Chlorophyll, by Steph Cherrywell. 2015. Also a coming-of-age story, but mostly a rip-roaring scifi adventure. Could well make a good introduction to the more modern views on interactive fiction.
- Coloratura, by Lynnea Glasser. 2013. Carpenterian horror from an unusual perspective. Swept the awards in the IF community when it came out.
- Eat Me, by Chandler Groover. 2017. A twisted fairytale that's thoroughly obsessed with food - the richer and the more varied, the better. A great showing of how much a writer who's willing to go far enough with it can do with prose style.
Maybe I can recommend games with a strong atmosphere instead. Like Morrowind, Deadly Premonition, Sleeping Dogs, Kentucky Route Zero, Silent Hill 2/3. Nier Automata if you like anime (which I don’t, so I didn’t find it as thrilling as many game reviewers seemed to).
The games you mention all seem to be games with options for dialog, which if you think about it is hard to write for; games are the only medium where you're essentially choosing your own adventure, and the writing has to account for any and all choices while still leading you towards some kind of conclusion that makes sense.
The second one though, didn't really click with me, and I stopped playing the series there.
Maybe with EA games being on Steam now, I could give the third one a try.
ME3? Failure from start to finish. The opening scene just basically says “the last two games were completely irrelevant”. The ending is not only unsatisfying, it relies on stuff that was barely established even in the first game and not even mentioned in the others. It’s a textbook case of failing to stick the landing.
(And the less said about “choose your favourite primary colour” as an ending the better. Should have just had one ending and made it a good one.)
There are some sequences that felt tedious but the story is definitely the best part overall.
ME2 is definitely the apex of the series by modern standards.
Also you forgot to mention ME:A. Maybe your face was tired? ;)
Funny, that difference is exactly what I most disliked about ME2.
It is optimized for two things, Battlefield and FIFA, and it is even cumbersome for that (see: BFV's lifecycle and the numerous long-standing bugs that plagued it). It was never designed for an RPG, and things like inventory systems and facial animation did not exist and had to be invented (badly). But EA is all-in on "everything has to run Frostbite" company-wide.
Having worked on multiple Frostbite and multiple Unreal games, both engines are capable of building a wide variety of games. The discrepancies in my experience aren’t technical but organizational. It is hard to compete with the scale of Epic’s developer support organization and the wider industry inertia around their technology.
I really enjoyed the first Mass Effect and its three-way balance between storytelling, exploration and combat.
I was really looking forward to playing Mass Effect 2, but its storytelling didn't seem as good as the first game and exploration was almost non-existent. ME2 is more focused on combat, which seems to be what most people want (ME2 gets fantastic reviews), but to me that's the least interesting part of a game.
I never played Mass Effect 3, but I hear it is even more combat-focused.
But by the time you get to Mass Effect 2, that world is already built. So ME2 instead spends its time telling a more contained story in that world. It's a valid choice, even if not your cup of tea.
ME2's story works because it's a relatively well executed Seven Samurai-style plot. We've seen this story many times before. It's a classic. Sci-fi stuff threatens the galaxy, assemble the crew, watch them love/hate each other through tough challenges, some don't make it through the epic conclusion, etc. Standard. For a lot of people, it's fun to relive those story beats in an interesting new world (Mass Effect) in a novel format (video games).
I think of ME2 as a smart and enduring example of video game writing craft. They were juggling a lot of different requirements: player choices affecting the story (including from the first game); a huge, complicated new setting; making the story accessible to new players without needing to play the first game; developing interesting characters; supporting multiple protagonists (male/female, paragon/renegade) and varied character interactions based on those attributes; supporting high production values (all lines voiced by real actors); fast development timeline; on and on. Hanging all of that on a familiar plot structure probably brought a lot of structure to what was otherwise a pretty chaotic project. Furthermore, what ME2 does better than other games in the series is let characters drive the action, rather than dragging characters from event to event. The writers did an amazing job on ME2 in context.
It really is.
It's a tremendous shame that ME 3 ruins it.
The game just feels like a chore. For one thing, the environments are too big. There's a lot of walking to get from one plot-point/action-sequence to the next. Second, the combat is bad. AI, controls, weapons that don't shoot where you point them until you level up several times to upgrade the skill. The conversation trees often feel like I'm just going through the paces of uncovering Codex entries for XP, and the long, pregnant pauses between dialog portions as the game loads up the animations for your responses is super annoying. It's an exercise in frustration and it really harms the storytelling aspect. I just never feel like I'm in any kind of flow of hearing the story. It feels dragged out (and I even have bug-fixing and fast-elevator mods installed).
I'm a fairly middling FPS player, and by that I mean I usually rank in the middle of online matches against humans, and I can usually finish single-player campaigns on "hard", if I have the patience for it on those days. But ME1 has been so consistently frustrating me that I've about decided to quit.
What changes in ME2 made the combat vastly better than ME1?
The second game used ammo clips and that made combat far easier.
I also keep hearing ME1’s combat described as “clunky” and ME2’s as “smoother”. Without much experience with this type of game, I think I just don’t have the skills to feel the difference.
Thanks for reminding me of it! I loved it. I didn't finish it though and might come back to it.
Direct narrative in games is very tricky, because it can conflict with player agency.
I think people forgot that Last of Us 1 wasn't a satisfying ending either, aside from being able to empathize with why people did what they did.
In that light, Last of Us 2 really doubles down on making you feel dissatisfied, and I like that such a possibility mirrors real life: irrational, striving for redemption, but not necessarily getting it.
I hate that I had to/chose to experience this in someone else's shoes. I love that they chose to portray it. The feeling is almost analogous to watching The Road if it ended with the adult's perspective, or how Children of Men kept me entertained, on the edge of my seat and able to admire the technical prowess of the intense long scenes with no camera cuts, but not sure if the resolution was a resolution at all. And how that's life.
I found the story of the game to be very nihilistic, and that just doesn't make for something that feels good, but portraying that took real effort which I can appreciate.
My litmus test is asking myself "If this game had a different name and wasn't part of a franchise I liked, would I appreciate it" and the answer is yes, a resounding yes.
In a similar sci-fi vein I've just started on Detroit: Become Human, which I've got high hopes for, story-wise.
TIE Fighter. If I had any doubts about fighting for the Empire, they were all gone after just five or six missions. After that, I wanted to blast as many traitorous rebel scum out of space as I could!
Betrayal at Krondor. One of the best RPGs I ever played. Writers could challenge R. Feist himself!
Full Throttle. The finest LucasArts quest ever, with great characters and story.
Hyperdimension Neptunia games made after 2013.
I would love to see games with emotionally involving improvisational acting from A.I. characters, but we're not there yet.
Social interactions in virtual reality can put you in an intense place: characters can invade your personal space (to the point of causing an adrenaline dump) or make you feel uncomfortable by keeping too far away. Will we see something that is halfway between "Frog Blender" and "Ender's Game"?
Beneath a Steel Sky
Ico (I don't think there was much of an explicitly stated story, it's mostly atmosphere)
The Shadowrun RPG setting is very 90s but if you like that sort of thing the Dragonfall game a couple years back was quite well written - I had no prior experience with the RPG world but enjoyed it because of the story.
But I agree with you, it's very difficult to find really well written games. Otherwise, games that are closer to the old point-and-click adventure genre generally have better writing: Gone Home, To the Moon, The Book of Unwritten Tales, and Overclocked: A History of Violence, to name a few.
Not sure if "worldbuilding" and "writing" are the same, or whether resources invested in the former leads to shortcuts in narrative, etc.
For Mass Effect, the main storyline seemed to be beholden to Video Game "Boss" requirements, but the side stories and character stories allowed whomever was assigned to those, to shine...
ToDo's on my list:
- The Last of Us 2
- Ghost of Tsushima
Each MGS game is essentially an epic interactive movie[^], with individual cutscenes lasting up to an hour.
Granted, not everyone likes Kojima's style, but those who do won't regret their time. Best played sequentially from MGS2.
[^] MGS5 is a notable exception, being an open-world style game with an abrupt ending, ultimately leading to Kojima leaving Konami. Still good fun, and it brings some of the stories from earlier games together.
As for "Kojima's style", it's basically "AAA Japanese Game Studio Style", which is "tell a coherent story through the first half, then go completely off the rails in the second."
You might look at it and think "Robot dinosaurs in a post-apocalyptic world where humans hunt them with bows and arrows? Sounds like good dumb fun, even if the writing is preachy and nonsensical." Thing is, the writing isn't preachy and nonsensical. It's unexpectedly excellent. They did something unusual:
1. hired a good writer
2. early in development
3. listened to him
and the results are phenomenal. No caveats necessary -- the story has none of the Mass Effect "strong limbs, weak backbone" issues.
Also, while HZD used to be a PS exclusive it will be on steam in a few weeks.
Disclaimer: I'm not one to dwell too much on the specific wording in games, mostly valuing worldbuilding and the immersive aspects of storytelling, but I do pay some attention to it.
Divinity Original Sin 2 has some excellent world building, and while the overarching story might be a bit cliché "chosen one"/"zero to hero" type deal, the moment to moment narratives are pretty well made, and the quest lines are very well tied, not only giving you stuff to do, but giving opportunities to learn more about the world.
And perhaps anything made by Supergiant is a good pick; in your case I would especially recommend giving Pyre a try. I held off playing it for the longest time because I was skeptical about the in-story "sport" gameplay, but it is very well made and perfectly enhances the narrative.
Mass Effect bugs aside, this is interesting!
Before this article, I never knew that DirectX (D3D) commands could be proxied from PC to PC; I think that's a great capability!
Also, if that's the case, and apparently it is, then it seems you could do something like X-Windows/X11 for PCs running Windows over a network by proxying D3D commands. And if Microsoft wants to be proprietary about that, the same thing could probably be done with open source software using OpenGL commands, i.e. proxy them over a network connection to get an X-Windows-like effect. Am I understanding the underlying technology correctly, or am I mistaken?
Khronos just does specifications and then lets its partners come up with actual tooling, which means that you end up with OEM specific SDKs most of them very thin in capabilities.
What has happened here is that a new implementation of these fast math routines appeared that returned results the game engine didn't expect, and the engine was not robust enough to deal with these variations. This is not too surprising, as these AMD CPUs did not exist yet when the game was developed, so QA would not have tested the game's compatibility with them.
The solution was to divert calls to `D3DXMatrixInverse` to another matrix inversion routine that makes use of more accurate floating point math, which produces identical results on all tested hardware.
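The same idea in miniature: a plain double-precision Gauss-Jordan inverse is the kind of routine such calls could be diverted to. This is a sketch of my own, not the actual code in the patch (and it ignores D3DXMatrixInverse's optional determinant output):

    #include <algorithm>
    #include <cmath>

    // Invert a row-major 4x4 float matrix via double-precision Gauss-Jordan elimination.
    // Returns false if the matrix is numerically singular.
    bool invert4x4(const float* in, float* out)
    {
        double a[4][8];                                  // augmented matrix [M | I]
        for (int r = 0; r < 4; r++)
            for (int c = 0; c < 4; c++)
            {
                a[r][c] = in[r * 4 + c];
                a[r][c + 4] = (r == c) ? 1.0 : 0.0;
            }
        for (int col = 0; col < 4; col++)
        {
            int pivot = col;                             // partial pivoting for stability
            for (int r = col + 1; r < 4; r++)
                if (std::fabs(a[r][col]) > std::fabs(a[pivot][col])) pivot = r;
            if (std::fabs(a[pivot][col]) < 1e-12) return false;
            for (int c = 0; c < 8; c++) std::swap(a[col][c], a[pivot][c]);
            const double inv = 1.0 / a[col][col];
            for (int c = 0; c < 8; c++) a[col][c] *= inv;
            for (int r = 0; r < 4; r++)
            {
                if (r == col) continue;
                const double f = a[r][col];
                for (int c = 0; c < 8; c++) a[r][c] -= f * a[col][c];
            }
        }
        for (int r = 0; r < 4; r++)
            for (int c = 0; c < 4; c++)
                out[r * 4 + c] = float(a[r][c + 4]);
        return true;
    }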
I don't think the real direct3d binaries are used by default anymore, unless you go out of your way to configure it that way.
Radeon Tech Group's in-house software support has always been abysmal since the days they were called ATI. It's been a chronic problem for both their drivers and their GPGPU ecosystem, NVIDIA can afford more engineers and better engineers to develop libraries that support the ecosystem and to make sure that everything works properly on their hardware. AMD's greatest successes have been when they get the open-source community to maintain and develop something for them.
Yes, AMD is operating on a much smaller budget but in the end it doesn't matter too much to the consumer when they can't play Overwatch for 9 months because AMD has a driver bug that causes "render target lost" errors leading to competitive bans for repeated disconnects, or... whatever the fuck happened with Navi.
Part of what you are paying for when you buy a graphics card is the ongoing software support, and AMD has always fallen flat on their face into a dumpster of rusty used syringe needles in that department.
That definitely needs some evidence to back it up. In my experience, most game rendering code is hot garbage that has been hammered just enough to work on the tested platforms (read: mostly nVidia).
> They opened the linux driver enough that others can do a lot of the work for them.
While there are outside contributions, most work on radeonsi (OpenGL) and the amdgpu kernel driver is done by AMD employees. The AMD Linux driver is better because it has less legacy code, can share more work with other drivers, can benefit from users who are more used to filing detailed bug reports and test development builds, and yes because users and other interested parties (Valve, Red Hat, ...) can contribute fixes for their pet issues - but it is still AMD doing most of the work.
For Vulkan on Linux with AMD graphics the most popular driver is entirely community developed. But AMD's Vulkan driver also works from what I hear.
Oddly, while I agree that ATI/AMD's 3d drivers have ranged from 'ok' to 'dumpster fire', I remember a time when their AIO (i.e. VIVO/Tuner/3d) boards just plain worked (aside from not very good 3d performance.) Perhaps their driver team couldn't adapt.
Now, in terms of pure price-performance, i think I'd want to buy Ryzen but... this sort of stuff is what scares me off AMD. I just want my crap to work. Reading some of the comments here, one commenter suggested there's an instruction to find an approximate determinant of a matrix where both Intel and AMD are standards compliant but those instructions produce different results on each chip.
Of course I don't know if this is true or not, but saving $200 on a PC build is just not something that justifies (to me) dealing with this kind of issue or, worse, potentially dealing with issues like these.
I buy NVidia for pretty much the same reason.
This kind of issue.. you mean a visual glitch with limited scope and available work arounds in a more than decade old video game?
Yes, what a serious issue. /s
They are not incorrect that there is a certain turnkey nature of using Intel, and certain merits to using a core that has been basically only incrementally refined for the last 10 years.
And yes, Intel has processor errata too, but AMD had to work through some major ones because it was a brand new architecture. They also chose not to take corrective action for some rather major ones - the fix for the segfault bug should have been disabling uop cache, or to do a recall, instead they just let people go on thinking the intermittent crashes they experience (including in windows) are software-related. They entirely declined to patch the Ryzen Take-A-Way bug, which leaks metadata about page table layouts and breaks KASLR on their processors, leaving users even more vulnerable to spectre v2. etc.
I had a 3770k in a previous machine, which suffered noticeable performance hits with the software mitigation applied.
Browsers immediately mitigated via other measures, and I've never read about any criminal network choosing to crawl through memory for credentials (that might even be stored encrypted) as opposed to just dropping some malware and keylogging.
It's a big deal if you are a cloud host or your threat model includes state actors, not for joe public worrying about his bank credentials or his CS:GO knife skins.