You can implement spinlocks in userspace under specific circumstances. You will need to mark your threads as realtime threads and have a fallback to a futex if the fast path (spinning) doesn't work out. And even then you need to benchmark on multiple machines, with realistic workloads (not microbenchmarks), to see whether it's actually worth it for your application. It's complex; Linus goes into more detail here
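For the curious, the spin-then-futex shape described above looks roughly like this. A Linux-only, glibc-flavored sketch, not a production lock: the `hybrid_lock` name and the spin count of 100 are made up for illustration, and error handling is omitted.

```c
// Sketch of a spin-then-futex lock (Linux-specific; error handling omitted).
#include <linux/futex.h>
#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

typedef struct {
    atomic_int state; // 0 = free, 1 = held, 2 = held with possible waiters
} hybrid_lock;

static void hybrid_lock_acquire(hybrid_lock *l) {
    // Fast path: spin a bounded number of times in userspace.
    for (int i = 0; i < 100; i++) {
        int expected = 0;
        if (atomic_compare_exchange_strong(&l->state, &expected, 1))
            return;
    }
    // Slow path: mark the lock contended and sleep in the kernel.
    // (If the exchange returns 0 we actually got the lock, just
    // conservatively marked as contended; that costs one spurious wake.)
    while (atomic_exchange(&l->state, 2) != 0)
        syscall(SYS_futex, &l->state, FUTEX_WAIT, 2, NULL, NULL, 0);
}

static void hybrid_lock_release(hybrid_lock *l) {
    // Only issue a wake syscall if someone may be sleeping.
    if (atomic_exchange(&l->state, 0) == 2)
        syscall(SYS_futex, &l->state, FUTEX_WAKE, 1, NULL, NULL, 0);
}
```

The key point is the slow path: instead of spinning forever, the thread tells the kernel "wake me when this word changes," which is exactly the information a pure userspace spinlock hides from the scheduler.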
Care to elaborate upon this statement?
> "Linux pessimizes for high-performance games atop its kernel."
Games can run at quite high performance (which makes the quoted statement feel really odd) atop the general-purpose Linux kernel, even with Windows-specific APIs being emulated via Wine/Proton.
There are many modern games (including Doom 2016, as I mentioned below) which run beautifully on Linux.
I wouldn't redo the work of people who have literally written books on the topic (https://books.google.com/books?id=1g1mDwAAQBAJ&pg=PA319&lpg=...). Copyright 2019, so even if spinlocks being the right tool is no longer conventional wisdom, the "best practices" in game education are still teaching them, and OS's may need to account for the code actually written, not the code they wish were written.
I don't think this should be asserted in such a black-and-white way. Yes, platforms need to accommodate the programming styles of users. But accommodating them blindly, when better solutions exist, is how you get bad APIs that age poorly.
In platforms that are aiming for long-term stable APIs, it may make more sense in some cases to prioritize the cleanliness of the API over developer patterns, given that developer patterns are easier to change.
> OS's may need to account for the code actually written
I absolutely disagree with this idea, particularly as it relates to spinlocks. Spinlocks are such a low-level, inefficient method of acquiring a lock that using them as the de facto method of acquiring a lock is a terrible idea.
Especially as more and more gaming happens on battery-powered devices. Any moderately sane OS will treat the resulting heat and increased battery drain as a trigger to throttle the CPU, hurting game performance even more than a sane lock-acquisition method would.
> "but that won't happen and gaming on linux will be too niche and will die."
To be quite frank, that sounds like a satisfactory outcome to me. If Google wants a different status quo, they should throw more money at the problem rather than leaving game developers to moan about it. You don't hear game developers moaning about this sort of shit when porting their games to the FreeBSD-backed PS4, presumably because Sony, unlike Google, actually gave a damn: they made sure the platform worked and made sure developers knew how it works. If the requisite work has already been done on FreeBSD, then maybe Google should have decided to use that instead. Either way, it's on them. They made this mess for themselves.
What you can do is make mutexes perfectly adaptive so that they switch to a blocking wait as soon as the lock owner itself gets preempted. I have a proposal to do that here: https://lore.kernel.org/lkml/CAKOZuesa338sc_=w6-wvro25idrSN_...
You should always use waiting primitives when possible instead of just spinning. If you tell the kernel what you're doing, the kernel can help you. If you hide the wait DAG from the kernel, it can't.
If people are learning bad practice from the texts, and the bad practice hits Linux's scheduler asymmetrically hard (relative to other game platforms), one can expect an outsized impact on the performance of games atop Linux.
It may be useful to explore why they teach those techniques before passing judgement on a whole ecosystem of educational material.
We may need updated books.
"Don't use spinlocks" is the API equivalent of "the user is holding it wrong." The user can either choose to modify their behavior or throw up their hands and not port to Linux.
Fundamentally, you need to tell the kernel what you're doing so it can help you. There's no fix here.
And given the overarching topic of performance on Stadia, it may behoove Google to fork Linux and specialize the scheduler implementation to be more spinlock-friendly, especially since Linux's benevolent dictator for life is telegraphing that he doesn't consider this an interesting problem to solve.
No, it cannot be fixed, because the kernel simply does not have enough information in the case of naive spin locks to make good scheduling decisions. I understand the Chesterton's-fence argument in favor of learning why game developers use spin locks before telling them to do something else, but in this case, I really think simple ignorance is the most likely explanation.
You never want a spinlock in userspace. Period. An adaptive mutex will give you what you want without the pathological downsides.
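For a concrete example of the adaptive-mutex option: glibc ships a non-portable mutex type, `PTHREAD_MUTEX_ADAPTIVE_NP`, that spins briefly in userspace before sleeping in the kernel via futex. A minimal sketch, assuming glibc (the `init_adaptive_lock` wrapper and `game_lock` name are made up for illustration):

```c
// Requires glibc; _GNU_SOURCE exposes the non-portable (_NP) mutex type.
#define _GNU_SOURCE
#include <pthread.h>

static pthread_mutex_t game_lock;

// Initialize game_lock as an adaptive mutex: a short, glibc-tuned
// userspace spin on contention, then a futex wait in the kernel.
static int init_adaptive_lock(void) {
    pthread_mutexattr_t attr;
    if (pthread_mutexattr_init(&attr) != 0)
        return -1;
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ADAPTIVE_NP);
    int rc = pthread_mutex_init(&game_lock, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}
```

You get the spinlock's fast path for short critical sections, while the kernel still learns about the wait when contention lasts long enough to matter.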
Look: your whole line of reasoning here is just bizarre and invalid. You can't say "well, maybe there's a reason they're doing it!" without actually backing this claim up with specific reasoning. Sorry, but you can't keep using your spinlocks even if they "work" on other platforms. If they "work" there, it's an accident.
If it cannot be fixed, then why is it only a problem for Linux?
What if you've already used them in code that runs on N other platforms, the game works fine on those platforms, and Linux is platform N+1, which you're trying to decide whether to support?
The less the code has to change, the easier it is to support platform N+1.
Yes, because that's not the main focus of a general-purpose kernel. Which doesn't mean there aren't ways around it if your objective is running Linux for a specific application.
The realtime patches have been in the mainline kernel for some time now, and there are other options for softer real-time behavior in the kernel as well.
I've got to use that quote at some point. Pure gold.
If you are not Linus, you might come off as a major asshat. Careful.
Wine/proton and the accompanying libraries are pretty f'ing amazing.
I tried the same in 2018 just for fun, and I wasn't able to run Skyrim: there was always some minor issue that made the game unplayable, like the mouse not working, crashes on certain screens, etc. I essentially kept trying older versions of Wine and eventually found one that made Skyrim playable (still not as flawless as it felt in 2015, but playable). I don't know if it was my mistake, some Arch Linux library weirdness, or something else, but I had to try dozens of Wine versions to find something that worked. That was fine for me, since I wasn't intending to play Skyrim, just trying to run it (for the "wow" effect), but I imagine people who actually need Windows software might be frustrated by this.
> I'm not a gamer so I can't tell the difference between 30fps 60 fps etc
It's like driving a car with a rough idle vs. a smooth idle. You can get used to driving a car with a rough idle, and not particularly care. But once you've driven both, you can identify which is which pretty easily, and the rough idle starts to bug you when you know it can be smoother.
Everyone who has played more than one video game knows that some just feel better than others. In one, your character is fluid and nimble, and in another, slow and clunky.
This is the result of many things, but a large portion comes from input lag and frame rate. More advanced users aren't necessarily more perceptive, they're just better at pinpointing what they're reacting to. (And in some cases, conscious of how an experience could be improved.)
This is similar to the study of any type of art. Whereas a casual observer might look at a painting and say "meh, I don't like it," someone who has formally studied art can describe precisely what isn't working for them.
Heck, there are people keeping older PCs around just for that, so you can e.g. run your Skyrim LE with Win7 and an older DirectX release etc.
"Retrogaming" is quickly approaching a very small time delta.
I guess that's the price you pay for performance and availability. Games shouldn't be that brittle, but any fast-paced industry is likely to run into these problems, and with high-speed internet for patches and a GPU duopoly, shipping broken and fixing it later is too easy.
I can only complain about some minor font-rendering issues and the lack of child-window rendering, which is frequently used in programs like these.
Once child-window rendering is fixed, I will actually consider ditching Windows entirely and going Wine-only.
In the pre-Windows-Vista era, the operating system handed control over to the app to draw itself on screen. In Vista they moved to a model, like OS X's, where each app drew to a buffer and the OS composited those buffers. That's part of why Vista was hot steaming garbage: they were still working out that transition.
It would be interesting to see the difference in performance between running the native version and the emulated version.
This is the same as Xbox, PS Now, etc. You have to pay for the service via a game pass or pro membership, on top of buying the game and possibly the system, before you can stream it.
Even in the case of Shadow, you'd have to pay for the service to stream games you buy and access from your own/their system.
They stream Spotify because they don't want to buy songs; once the current song is no longer popular, they move on to another one and forget the first ever existed. They play video games, and when they're done they sell them and buy new ones, never wanting to revisit the old one. They spend $50 on a new game and sell it for $15-ish. For that mindset, this is a great product, along with the digital-only Xbox One.
I am not implying all kids or youth are like this, but mine are, and a bunch of their friends are. I prefer GOG or a physical copy, but then again I don't buy that many games.
> But the issue isn’t bandwidth, really, is it?
No, bandwidth isn't the issue; I believe the OP was referring to latency.
I have 75 Mbps DSL and the latency is garbage, with 29 ms being the best case. I get frequent periods of 90+ ms latency.
My "ping" in games like Battlefield is in the 150+ ms range, and I frequently get dropped from matches for high latency.
When I had cable, my latency was around 15 ms and my ping times were 90-120 ms in Battlefield.
In normal online gaming you don't have that issue: when I move my crosshair in CS:GO, it moves instantly.
Check out some of the Games Done Quick runs from last summer on Youtube, for example. A lot of the FPS games especially look dreadful. I'm sure Stadia can do better, but there's only so much bandwidth you can throw at this problem before it becomes untenable: https://twitter.com/dada78641/status/1207751665752911872
To me, the main issue with Stadia is not the product itself but the massive amount of extra CO2 generated trying to solve problems that were already solved decades ago by having your own CPU in your own personal computer in your own house.
There's also a chicken-egg effect that Google's still interested in for secondary reasons. Reverse the question: assume Stadia takes off and becomes the "killer app" of gaming (big assumption, but still). Will local municipalities continue to tolerate crap networks if it means they're left out in the cold on this thing? Google's still interested in improved network infrastructure and has the clout to play incentive games to make that scenario more likely.
Of course, it gets really hairy really quickly. Technically, perfect runahead/negative latency requires ([number of possible input states] to the power of [frames of lookahead]) frames to be rendered.
A single player NES game has 8 binary state controller buttons, so 2 to the power of 8 input states. You need to render 256 frames for each frame of runahead.
Now surely you're thinking you'll just prune the set of possible input states through smart prediction of what the player will probably do next, but it doesn't matter. The number very quickly grows to ludicrous amounts of processing for anything that isn't an antiquated console. Let alone for 3D FPSes that use float values for e.g. mouse state.
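A back-of-envelope check of that combinatorial explosion (the `speculative_frames` helper is made up for illustration):

```c
#include <stdint.h>

// Number of frames that must be speculatively rendered to cover every
// possible input sequence: (2^buttons)^lookahead. With 8 binary buttons
// that's 256 frames for 1 frame of runahead, 65,536 for 2, and over
// 4 billion for 4.
static uint64_t speculative_frames(int buttons, int lookahead) {
    uint64_t states_per_frame = 1ULL << buttons; // 2^buttons input states
    uint64_t total = 1;
    for (int k = 0; k < lookahead; k++)
        total *= states_per_frame;
    return total;
}
```

Even aggressive pruning only buys a few extra frames of lookahead before the exponent wins, which is the point being made above.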
Edit: and I should add that if they even try to do this, the amount of CO2 they'll waste is going to be horrifying.
Local conditions (in the US, for example) are not necessarily a showstopper in Japan, South Korea, or parts of Europe. Low-latency, extremely-high-bandwidth networks are the new playground.
I mean, you should get 50 Mbit/s from very affordable mobile broadband. If you have an ADSL landline, you can get a hybrid solution (combined mobile + landline that works as one) and you should get 100 Mbit/s.
I've played Destiny, Thumper, Ghost Recon, Tomb Raider, etc. on it, and only Thumper was tricky when I tried it on a pretty crap connection.
Seems a bit dismissive when MS, Sony, Nvidia, Google, etc. have built streaming services that consumers have enjoyed.
I'm sure plenty of people in this thread have fiber but Hacker News is not really the target audience for Stadia.
35 megabits per second: 4K, 60 fps, HDR, 5.1 surround sound.
Consumer internet is fine for this in a lot of the developed world.
It's surprising how much latency a human will tolerate without notice, however.
I think that's only if we ignore monitor refresh rates. Most of us are capped at 60 FPS, which, with a lot of hand-waving, is equivalent to ~17 ms of latency. It's relatively easy to get a network with latency better than that. If you can complete the rendering + data round trip in under 17 ms, you're going to be functionally equivalent to a local computer.
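The hand-waving is easy to check: the frame budget at a given refresh rate is just 1000 ms divided by the rate (the `frame_budget_ms` helper is made up for illustration):

```c
// Milliseconds available per frame at a given refresh rate.
// 30 Hz -> ~33.3 ms, 60 Hz -> ~16.7 ms, 144 Hz -> ~6.9 ms.
static double frame_budget_ms(int fps) {
    return 1000.0 / (double)fps;
}
```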
So, I'd say I'm running the best-case scenario available in my area.
My anecdata counterpoint is that I pay for the cheapest/slowest broadband available in a midsized city and my ping to google.com is 14ms.
Product doesn't have to cater to every single place on earth to be viable. It's like arguing that ARPAnet was a terrible product because not everyone had T1 lines.
200 Mb uncapped internet in the UK is super common and pretty cheap (£30-£35 a month), as it is in most of Western Europe.
I know this is an America-centric website, but please don't apply your standards to us; it's terrifying.
As someone living in London, this isn't accurate. Most of my co-workers do not have access to "200 Mb" internet; the most common is 25 Mb or less. I personally only get 18 Mb on my home connection, i.e. roughly 2 MB/sec, and this was the best I could get from any provider.
The state of Internet infrastructure is fucking terrible in the UK given how small a country it is. Coming from California to the UK felt like going back in time in terms of Internet speeds.
If you can stream video at decent quality, you have the bandwidth for Stadia.
Figure 2: "30 Mbps coverage in all Member States in 2011 and in 2017" https://op.europa.eu/webpub/eca/special-reports/broadband-12...
(+) only slightly exaggerating
Germany is pretty bad when it comes to internet speeds. Broken market.
I assume France is in the developed world as of 2020, and I believe I've had more luck with Internet access than most people here.
I don't like Stadia for other reasons, but saying consumer connections aren't capable because you live in a country that sucks in that regard is not really a good argument.
Do you think modern cars are a bad idea as well because there are parts of Africa that don't have good roads?
Unless you have fiber, it's not worth it. Some DSL internet still sucks today.
You don't need fiber. Feels as good as local play on my 220 Mbps down/15 Mbps up internet.
Anyway, most users don't have such high bandwidth...
And I don't think it's particularly great bandwidth... just about every medium/major metro in the US is going to have comparable internet or better.
They control the whole box, it should be more like a console than a regular PC.
Skarupke framed this as 'mutex vs. spinlock', as if there were an ongoing debate between two comparable options, when they aren't comparable at all: in userland you simply cannot use spinlocks safely.
The Windows scheduler apparently handles spinlocks better, whatever that means. But the way it should work is that spinlocks perform poorly if you use them poorly. If you take full manual control by running a spinlock in userland, you will be preempted by other things, and you should know that.
That's what Linux is really about: the ability to do something and not have the system "better-ize" it, like Windows does, by doing some black magic and running an unknown process alongside your spinlock to make it run better.
A spinlock run in userland SHOULD be scheduled by other things, as Linus says. As Linus implemented in the kernel.
Games often integrate the message loop into their game loop: they poll to see if any new message has arrived and, if so, handle it; in any case, the loop does its game thing.
If there's not much to do, because the game is paused while minimized for example, this spins much like a spinlock spins. If, in addition, it's synchronizing other threads using spinlocks, then yes, it can burn several cores' worth of CPU doing nothing.
Better games handle the loop differently while minimized, for example falling back to the default blocking wait-for-message variant.
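A portable sketch of the two pump styles; on Windows this is roughly PeekMessage-in-a-loop versus blocking in GetMessage. The `pump` type and helper names here are made up as a stand-in for the OS message queue.

```c
#include <pthread.h>

// Made-up stand-in for an OS message queue.
typedef struct {
    pthread_mutex_t mu;
    pthread_cond_t cv;
    int pending; // number of queued "messages"
} pump;

// Polling style: returns immediately whether or not anything arrived.
// The caller loops right back, so when the game is minimized and idle
// this spins much like a spinlock, burning a core doing nothing.
static int poll_messages(pump *p) {
    pthread_mutex_lock(&p->mu);
    int got = p->pending;
    p->pending = 0;
    pthread_mutex_unlock(&p->mu);
    return got;
}

// Blocking style for the minimized case: sleep in the kernel until a
// message actually arrives, burning no CPU in the meantime.
static int wait_for_messages(pump *p) {
    pthread_mutex_lock(&p->mu);
    while (p->pending == 0)
        pthread_cond_wait(&p->cv, &p->mu);
    int got = p->pending;
    p->pending = 0;
    pthread_mutex_unlock(&p->mu);
    return got;
}

// Producer side: enqueue a message and wake any blocked consumer.
static void post_message(pump *p) {
    pthread_mutex_lock(&p->mu);
    p->pending++;
    pthread_cond_signal(&p->cv);
    pthread_mutex_unlock(&p->mu);
}
```

Switching from the first style to the second while minimized is exactly the "default wait-for-message variant" mentioned above: same messages delivered, but the kernel knows the thread is waiting.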