Microsoft Flight Simulator's cloud debut comes with upsides for devs (gamedeveloper.com)
79 points by jamesdco on March 7, 2022 | 71 comments



Do these cloud gaming offerings really make a profit with current hardware costs?

I feel like large companies are pouring money into this with the hope that they will be able to lock users into their cloud offering. However, it is likely they are losing huge amounts of money because of the hardware costs.


> Do these cloud gaming offerings really make a profit with current hardware costs?

Judging by the ones that have gone broke, no.

"Cloud gaming" of the kind where the game runs on a server and the results are sent as video to the client has been tried, on and off, for about five years. Most of the early providers are gone. The trouble is that each user needs an entire server, one comparable to a gamer PC.

Cloud gaming services seem to come in two flavors - expensive, and loss leader.

Shadow PC is in the expensive category. Price is $30/month. When you connect, they launch a server for you and load your environment into it. So you're buying a part-time VM. Apparently you can stay connected as long as you want, although I'm not sure if there's a limit they are not mentioning up front.

Nvidia GeForce Now is cheaper. Price is $10/month. You get kicked off after 6 hours. For $17/month, you get a better server with an Nvidia 3080 and 8 hours before being kicked off. There's a free tier, where you wait to get in and get kicked off after an hour or two. Originally, the prices were lower, but that was in the loss leader phase. It also helps that Nvidia makes GPUs. You can only run games that Nvidia has ported to that system.

Google Stadia is $10 a month. Games have to be ported to it, and there's suspicion it may soon join the long list of former Google products.

Vortex's site now says they are no longer accepting new users, and their blog is a Google Stadia ad.


Keep in mind that the base tier of Google Stadia is completely free when it comes to free to play games like Destiny 2, or if you buy the game outright.

I still play Cyberpunk 2077 on the free tier of Stadia due to a promotion where the game was only $45 and came with a controller/chromecast. Sold the hardware to pay for the game and now I just play with an Xbox controller on whatever computer or phone I'm on at the time.


>Shadow PC is in the expensive category. Price is $30/month.

That's surprisingly inexpensive. The graphics card they promise is nearly $1,000 by itself. For what you'd spend on a DIY build you could probably get 4-5 years of subscription.


It's quite a bit more than 5 years[1].

[1] https://en.m.wikipedia.org/wiki/OnLive


I am very amazed with GeForce Now. I thought the lag would be unbearable. I tried some Fortnite, and I'm not even a gamer, and I even won some rounds. Astounding. If it's that good for a fast FPS then it's a no-brainer for slower types of games. I would gladly pay 50€+/mo if I was a gamer, just so I wouldn't have to bother to keep upgrading my PC.


Fortnite is not an FPS, it's a slow-paced, gamepad-friendly survival base-building game.

'Tim Sweeney described it as “Minecraft meets Left 4 Dead.”'


> You get kicked off after 6 hours

Is that 6 hours of continuous time, or a quota of 6 hours total for the month?


I'm not 100% sure about this guess, but if it's "free" it probably means they calculated that they have x% spare server capacity and they provide that for "free". Google Colab makes it obvious when they give you a GPU, but I don't know about Microsoft.


Exactly what I was thinking, they can just use spare Azure capacity.


The only problem is that you need powerful machines with GPUs, which are only a small subset of spare capacity.


I think the primary technical challenge is the scale of AAA games.

For me, this stuff becomes viable when one game studio goes all-in on building a streaming-only game experience that takes the full economics of scaling GPUs into account.

If you reduce visual fidelity enough, you can get away with a lot more clients per host. Hell, depending on how you architect things, you might even be able to serve some clients from servers that don't even have GPUs.

The upsides of cloud gaming are pretty solid if you can get over the caveats. The biggest thing for me would be competitive gaming experiences that are guaranteed to not have any cheaters. That would feel really nice.


On a similar note, where does the hardware go once it completes its lifecycle in the cloud?

Are pods being donated to schools?

It would be a great idea to give a pod to a school with management tools etc., and teach the kids how to work with, effectively, [aws/gcp/azure] management tools, to build up a skillset early and accelerate how the yoots understand what the cloud is and how it's managed.


I could be wrong, but I think the hardware can't be sustained outside their server farms without significant investment in the supporting infrastructure; it's not like they throw away entire server racks at a time, they probably only retire small pieces of the hardware at a time.


Microsoft is using Xbox Series X. Nvidia is using their own GPUs.


Here's what the Xbox cloud hardware looks like: https://www.youtube.com/watch?v=G5g4Xqy8kG4&t=18s


Is this essentially 4 Xboxes crammed into a 2U server?


Apparently it's 8 Xboxes in the 2U chassis (they're stacked two high) [0], with an analysis here [1].

[0]: https://external-preview.redd.it/Ma7bVRkbUuoM4xhzbJFe3qdTzAZ... [1]: https://www.reddit.com/r/xcloud/comments/gdp48d/project_xclo...


Yep, and this is one of the reasons why devs don't care about Stadia: with the Nvidia and Microsoft offerings they basically have to fine-tune existing code for cloud workloads, not rewrite it from scratch while betting on a vendor that is known to kill products.


Curious where you work or who you know that gives you the strong insight that developers don’t care about Stadia. Can you share?


Where I work doesn't have anything to do with games.

I have followed game development since the 16-bit demoscene days, when I used to hang around with crackers, switchers and ProTracker musicians and read Hugi. I then graduated to being an IGDA member, a Gamasutra subscriber during its existence, and a consumer of MakeGames and Develop; attended a couple of GDCE events; had some interviews at well-known console manufacturers; and eventually decided that I would rather keep doing boring enterprise consulting than subject myself to game industry working conditions.

Which leaves me with a network of people who are still in the industry; naturally there are opportunities to talk about Stadia, and very few of them are impressed with Stadia's track record or its owner's long-term commitment.


Interesting. I used to crack games for Fairlight/Razor 1911, attended SIGGRAPH from 2001-2006 (even volunteered for the conference), and worked at Nvidia for 3 years on the Windows GPU driver. I don't talk to anyone about games anymore. I have outgrown consoles, but I play Stadia on my TV because it doesn't require one. When I show my friends, they quickly order a Stadia controller and do the same. I also worked at Google in the same building as the Stadia team (but not on Stadia itself, or anything related; I don't know anyone who works on it personally).


I wonder how they implement fast loading times. Flight Simulator is embarrassingly slow to load and takes 2-3 minutes on my reasonably fast machine. I guess they have fully loaded game instances on standby and then sign you in once you connect using the mentioned new cloud gaming API?


That's an interesting issue for the "metaverse". Once we get past the NFT clown car era, and big-world systems with user-created content at high resolution come out, metaverse systems will have to face the bandwidth problem Second Life faces. Delivering assets to the user needs more bandwidth than delivering video. Even 4K video. Second Life delivers content on the fly, and you get to watch stuff appear as content comes in. With enough bandwidth and a gamer PC, it's not bad, but many users are on slow links with weak clients, and suffer badly.

If the work is being done "in the cloud", there can be much more bandwidth to the asset servers. At least 10Gb/s between the game machine and the asset store, all within the data center. Flight Simulator needs that because, like Google Earth, it has a whole planet of assets. A Ready Player One quality metaverse will have the same problem. Or, for example, the Matrix demo for Unreal Engine 5, where you download 16 square kilometers of highly detailed city.

There are performance downsides to cloud gaming. Too much lag, mostly. The speed of light alone adds too much delay to allow remote VR rendering. At 120 FPS, a few hundred miles of transmission distance alone costs a frame time. Network delay makes it worse. You can't buffer the video ahead, like you can for pre-stored video. The pew-pew crowd gets unhappy above 40ms, although there are tricks for FPS games to make targeting work across laggy links.
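
To put rough numbers on that frame-time claim (the distance and fiber speed below are assumed round figures, not anything measured):

    #include <cstdio>

    int main() {
        // Assumed round numbers, for illustration only.
        const double frame_ms        = 1000.0 / 120.0; // ~8.3 ms budget per frame at 120 FPS
        const double fiber_km_per_ms = 200.0;          // light in fiber travels at roughly 2/3 c
        const double one_way_km      = 800.0;          // ~500 miles to the data center
        const double rtt_ms          = 2.0 * one_way_km / fiber_km_per_ms; // 8 ms round trip

        std::printf("frame budget %.1f ms, propagation RTT alone %.1f ms\n", frame_ms, rtt_ms);
        // Encode, decode, and network queuing delay come on top of this, so the
        // round trip eats a whole 120 FPS frame before any rendering even starts.
        return 0;
    }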


Pre-loaded machines work, but I imagine you would have to move away from that once you support enough games; it should still be "easy" to just resume a machine to a different game's start state from disk. AFAIK games mostly use lots of CPU when starting, so you're basically pre-computing all that and using the disk as a look-up table.


> I wonder how they implement fast loading times

Proper DX12 is still on the roadmap.

They will do it like this:

https://devblogs.microsoft.com/directx/directstorage-is-comi...


I'm still confused by the existence of APIs like this. Why would a video game suddenly need a proprietary way to access a storage device? Have hard drives & filesystems gotten that much slower over the past 20 years?


This piece from the article was interesting:

"NVMe devices are not only extremely high bandwidth SSD based devices, but they also have hardware data access pipes called NVMe queues which are particularly suited to gaming workloads. To get data off the drive, an OS submits a request to the drive and data is delivered to the app via these queues. An NVMe device can have multiple queues and each queue can contain many requests at a time. This is a perfect match to the parallel and batched nature of modern gaming workloads. The DirectStorage programming model essentially gives developers direct control over that highly optimized hardware."


That's a real thing for game consoles. The PS5 has 16GB of RAM, which is directly accessible by the CPUs, GPU, and SSD controller. So you can load an asset directly from SSD to GPU memory without a recopy. In a PC, you'd have copies from disk to disk drive cache to OS memory to user space to GPU memory. Also, in a console, where you know exactly what the hardware configuration is, you can store the assets in exactly the form the GPU wants.

This has nothing to do with "cloud", though.


I can see Microsoft making it a requirement to be logged in to a cloud service to take advantage of DirectStorage.


>An NVMe device can have multiple queues and each queue can contain many requests at a time.

To put this into perspective, whereas AHCI/SATA (NCQ and TCQ) has one command queue with a depth of 32 commands (SAS has a queue depth of 254 commands), NVMe is designed to have up to 65,535 queues with as many as 65,536 commands per queue.


> This is a perfect match to the parallel and batched nature of modern gaming workloads

This sounds like marketing speak. Some 'AAA' games now make use of parallelism at the CPU level. But almost all games today aren't just single-threaded, they are laughably single-threaded.

At the GPU level, rendering has been parallel to some degree since special purpose 3D accelerators showed up decades ago. More recently, arbitrary shaders have allowed some logic that was previously done on the CPU to be moved to the GPU.

Video games are not "parallel" in any sense.


> Almost all games today aren't just single threaded, they are laughably single threaded.

Many (maybe most, still) games have embarrassingly single-threaded game logic (i.e., there's still one "runloop" thread which manages game state), but almost universally at this point there's a separate thread for loading data, decompressing data, asset / script compilation, and audio. Many games also use a separate thread for physics as well, and some use worker threads for AI / rules engine NPC behavior as well.

Anyway, loading data is where the parallel and batched nature of modern gaming workloads comes in - almost all games at this point do some kind of constant background asset loading to avoid the need for load screens between areas. This asset loading is almost always done in a background thread and is pretty much a 1:1 match for an NVMe queue - request the blocks containing the data you need, ask to be flagged when a block has been DMAed into the memory area you want it in, and then decompress it in the background.
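
A minimal sketch of that background-loading pattern, with plain blocking file reads standing in for the NVMe queue and every name invented for illustration (nothing here is from a real engine):

    #include <atomic>
    #include <condition_variable>
    #include <cstdio>
    #include <memory>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>
    #include <vector>

    // One asset request: where it lives on disk, plus a flag the game
    // loop can poll to find out when the bytes are ready to use.
    struct AssetRequest {
        std::string path;
        std::vector<char> data;          // filled in by the loader thread
        std::atomic<bool> ready{false};
    };

    class AssetStreamer {
    public:
        AssetStreamer() : worker_(&AssetStreamer::run, this) {}
        ~AssetStreamer() {
            { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
            cv_.notify_one();
            worker_.join();
        }
        // Called from the game thread: queue a load, get back a handle to poll.
        std::shared_ptr<AssetRequest> request(std::string path) {
            auto req = std::make_shared<AssetRequest>();
            req->path = std::move(path);
            { std::lock_guard<std::mutex> lk(m_); pending_.push(req); }
            cv_.notify_one();
            return req;
        }
    private:
        void run() {
            for (;;) {
                std::shared_ptr<AssetRequest> req;
                {
                    std::unique_lock<std::mutex> lk(m_);
                    cv_.wait(lk, [&] { return stop_ || !pending_.empty(); });
                    if (stop_ && pending_.empty()) return;
                    req = pending_.front();
                    pending_.pop();
                }
                // Blocking read (and, in a real engine, decompression) happens
                // here, off the main thread, so the render loop never stalls.
                if (std::FILE* f = std::fopen(req->path.c_str(), "rb")) {
                    std::fseek(f, 0, SEEK_END);
                    long n = std::ftell(f);
                    std::fseek(f, 0, SEEK_SET);
                    req->data.resize(n > 0 ? n : 0);
                    if (!req->data.empty())
                        std::fread(req->data.data(), 1, req->data.size(), f);
                    std::fclose(f);
                }
                req->ready.store(true);  // the game loop polls this flag each frame
            }
        }
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<std::shared_ptr<AssetRequest>> pending_;
        bool stop_ = false;
        std::thread worker_;  // declared last so it starts after the queue exists
    };

A DirectStorage-style API essentially lets the "read these blocks and flag me when they're in memory" part be handed to the hardware queue, instead of a blocking call on a worker thread.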


Modern games have used multiple threads for more than a decade now.

Single-threaded computation is for serialized state mutations in the main game loop. Everything else, like loading data, can and does happen on other threads. The latest engines use a job dispatch system with fractured/sharded state to distribute work as much as possible across cores.

As far as this topic goes, loading new data is already done in parallel, and can now be further parallelized at the block level with built-in APIs instead of going through the typical OS overhead or a custom virtualization layer.


> Almost all games today aren't just single threaded, they are laughably single threaded.

This clearly isn't aimed at your average Unity game. I've not seen any big AAA game in the last decade that's "laughably single threaded" when it comes to asset handling, which this API is all about.


I think this assertion is terribly outdated for 'serious' (console/AAA) games and 'professional' game developers. I can remember as far back as 2007 interviewing for an Xbox 360 job and being asked to describe in detail how I would keep all the threads busy. A professional game developer working on a serious high-detail/low-latency game would not be taken seriously if they didn't know how to make a work queue.

What you say is probably true of most indie games. That's a whole different world. But the 'state of the art' is accurately described by 'batched and parallel workloads'.


A large part of loading delay, either initial load or between levels/areas, in some games is getting high-res textures and other bulk data from storage into the graphics memory. You might think “just cache it in RAM, my 6GB graphics card is far smaller than the 32GB sat on my motherboard”, but there are two issues there:

1. People who game a lot or are just well off might have 32GB, but game publishers need to support much lower configs than that (the recommended minimum for GTA5 is 8GB and it technically supports 4; for the newer Cyberpunk 2077 those figures are still only 12GB and 8). Last time I saw Steam hardware survey results, less than 50% of machines surveyed had 16GB or more RAM.

2. Even if we consider 16GB to be the minimum for a serious gamer, you aren't going to use most of that for caching assets. Even if you could use 10GB worth, with a massively open play area [1], how easy is it to manage which 10GB of the assets you load? Both the games I've mentioned weigh in at about 70GB. If the user skimped on RAM and GPU to get a nice large fast NVMe drive, being able to stream data as directly as possible between that drive and the graphics RAM is going to be an attractive optimisation, both for the initial load and as new assets are needed mid-game.

The core game engine not taking full advantage of 16 cores of CPU when they are present [2] doesn't mean that a more direct transfer of data from storage to GPU, keeping CPU use to a minimum [3], can't be useful. Having spare cores lying around is nice, but someone with only 4 might not have that luxury, and even if you have some cores otherwise doing nothing, that doesn't mean skipping the CPU as much as possible [3] can't be noticeably better than using a fast core. Even if the CPU does still have to be involved, APIs like this could massively reduce user↔kernel transitions, which can be pretty expensive.

[1] this is less of an issue for games that can be easily split into more manageable chunks

[2] again, think about the larger part of the market: 4 cores is still very common

[3] let the game logic running on the CPU decide a transfer should happen, then have the transfer go directly over the bus instead of via the CPU and/or main memory at all


1. "people who are well-off might have NVMe SSDs, but most people are using much lower specs than that"

2. "even taking NVMe SSDs to be the minimum, even the people who do have them usually can't afford to allocate up to 250GB for a single game, let alone when a shitty patching system requires double that to apply a patch".

It's certainly a step forward as far as loading technology goes, don't get me wrong, but it's not about people with low-spec systems at all; you need NVMe as a minimum ask for this, and you need quite a bit of it.


NVMe is a cheaper upgrade than others a gamer on a budget might consider, especially if building a new machine instead of prolonging the life of an old one. Many motherboards support it at little extra cost over those that don't, and the drives themselves can be little or no more expensive than SATA SSD units (quick check: a known-name 1TB NVMe unit for inside £70, few SATA units are even that cheap, and similarly close pricing between the types at the ½TB mark too).

Not sure where you are getting 250GB for a single game from (~70GB is what I mentioned, and IIRC MSFS is ~130GB), but that supports my point more than counters it: if you might shoot through areas quickly and want high-res textures to be constantly available (a low, fast tour of a significant area?), with the amount of RAM on the recommended minimum cards (4GB, the bare minimum being available in 2GB models) and the recommended minimum system RAM (16GB, the required minimum being 8), then getting data from disk to the GPU as quickly as possible might be more beneficial than having more RAM for cache, or more CPU cores, etc.

Yes, a well-off gamer with a huge 8K screen who can afford to be scalped for a top-of-the-line GFX card is going to benefit from this, but so could many others.

The performance difference between NVMe and SATA SSDs is nothing like that seen between more traditional drives and SSDs, contrary to what much breathless marketing text will exclaim, but as there isn't much of a cost difference maybe this sort of direct transfer feature will change the value for money dynamic a bit more.


Games already use virtual file systems for storing assets to reduce load times. Otherwise they have to load a bunch of tiny files with filesystem and OS overhead for each. I'm sure some would appreciate not having to invent their own.


In that case aren't you talking about read-only asset packs? Those have been around for years and aren't very complex to implement. Big studios already have their own implementations. There are plenty of "free as in beer" implementations out there to use.
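
For anyone who hasn't seen one, here is a bare-bones sketch of such a read-only pack; the on-disk layout and names are made up for illustration, not any particular engine's format:

    #include <cstdint>
    #include <cstdio>
    #include <string>
    #include <unordered_map>
    #include <vector>

    // Hypothetical pack layout: [u32 entry count][entries: u16 name length,
    // name bytes, u64 offset, u64 size][raw blob data]. Read-only once built.
    struct PackEntry { uint64_t offset; uint64_t size; };

    class AssetPack {
    public:
        bool open(const char* path) {
            f_ = std::fopen(path, "rb");
            if (!f_) return false;
            uint32_t count = 0;
            std::fread(&count, sizeof count, 1, f_);
            for (uint32_t i = 0; i < count; ++i) {
                uint16_t len = 0;
                std::fread(&len, sizeof len, 1, f_);
                std::string name(len, '\0');
                std::fread(&name[0], 1, len, f_);
                PackEntry e{};
                std::fread(&e.offset, sizeof e.offset, 1, f_);
                std::fread(&e.size, sizeof e.size, 1, f_);
                index_[name] = e;            // the whole index lives in memory
            }
            return true;
        }
        // One seek + one read per asset; no per-file open/close overhead.
        bool read(const std::string& name, std::vector<char>& out) const {
            auto it = index_.find(name);
            if (it == index_.end()) return false;
            out.resize(it->second.size);
            std::fseek(f_, static_cast<long>(it->second.offset), SEEK_SET);
            return std::fread(out.data(), 1, out.size(), f_) == out.size();
        }
    private:
        std::FILE* f_ = nullptr;
        std::unordered_map<std::string, PackEntry> index_;
    };

Because the index sits in memory and everything comes from one file handle, the per-file open/stat/close overhead the parent comment mentions disappears.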


This is true of every aspect of game development DirectX implements an API for, but it's still popular. I don't know enough to know why, but there's probably a good reason for that.


> Big studios already have their own implementations.

Hence the API.


It skips a couple of layers of abstraction to enable faster load times; the goal here is to improve over the state of the art - the promise of 'no load times' sometimes delivered by the PS5 and Xbox Series X via this same approach.

Some PC ports are already getting close to this as well - Elden Ring's load times are very short on my PC.

It's also hard to overstate how much of an improvement it is to have an NVMe drive send data directly to the GPU instead of having to send it via the CPU. The amount of pointless work involved in the CPU hop is pretty significant.


I guess the thing that makes NVMe -> GPU really practical is the presence of arbitrary shader code. Even if your assets are in some general or optimized format, you can run an arbitrary shader that copies from one buffer to another in order to get stuff into the right format.


Actually, it's rather the opposite: the focus is on fixed-function decoder blocks that are present in the new consoles.


My understanding is that it's more that storage has increased in speed so much that the overheads of the privilege transitions between user and kernel space and the synchronous filesystem APIs are the limiting factors at this point.

This is why we see specialized mechanisms like io_uring in the Linux kernel.
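
As a rough illustration of that model (Linux with liburing; the file name and block sizes are made up), a whole batch of reads is handed to the kernel with one submit call and the completions are reaped as they arrive:

    // Sketch only: build with something like `g++ demo.cpp -luring` on Linux.
    #include <fcntl.h>
    #include <liburing.h>
    #include <unistd.h>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
        io_uring ring;
        io_uring_queue_init(8, &ring, 0);                // submission + completion queues

        int fd = open("assets.pak", O_RDONLY);           // hypothetical asset file
        const off_t offsets[4] = {0, 4096, 8192, 12288}; // blocks we want, all at once
        std::vector<std::vector<char>> bufs(4, std::vector<char>(4096));

        for (int i = 0; i < 4; ++i) {                    // queue four reads...
            io_uring_sqe* sqe = io_uring_get_sqe(&ring);
            io_uring_prep_read(sqe, fd, bufs[i].data(), bufs[i].size(), offsets[i]);
            io_uring_sqe_set_data(sqe, reinterpret_cast<void*>(static_cast<intptr_t>(i)));
        }
        io_uring_submit(&ring);                          // ...submitted with one syscall

        for (int i = 0; i < 4; ++i) {                    // reap completions as they land
            io_uring_cqe* cqe = nullptr;
            io_uring_wait_cqe(&ring, &cqe);
            std::printf("request %ld done: %d bytes\n",
                        (long)reinterpret_cast<intptr_t>(io_uring_cqe_get_data(cqe)),
                        cqe->res);
            io_uring_cqe_seen(&ring, cqe);
        }

        io_uring_queue_exit(&ring);
        close(fd);
    }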


If I understand that announcement correctly, most of the performance benefit comes from cutting down the number of kernel-to-userspace transitions, similar to userspace networking on Linux.

Another possibility is that DirectStorage requires the use of raw NVMe devices, or at least raw partitions, to achieve top performance... basically cutting the NTFS filesystem out of the code path too. NTFS is extremely old and complex to implement, meaning that a "tiny" file system, e.g. without journalling, permissions, ACLs and the like, makes more sense.


I don't know enough about Windows / NTFS to be an authority here. But from what you're saying it sounds like the block storage now massively outperforms the filesystem (NTFS). So in that aspect NTFS hasn't gotten slower, but it has failed to keep up. So cutting it out would in fact provide a massive performance boost.


It's not just the file system.

Currently, when you have a game with, say, 10k separate asset files, you either place 10k asset files in the file system (which is slow because of NTFS) or you develop some sort of virtual read-only filesystem of your own on top of that (which has been done for decades too, see e.g. the WAD format created for Doom). There have been many implementations of these, and yet they suffer from two things: the OS filesystem cache can't know which parts of such a package file (i.e. the index) are relevant to always keep in memory, and the game has to copy the assets to the GPU.

The general idea, if I get it right, is that DirectStorage provides a standardized layer that:

- cuts down on filesystem-related overhead by providing its own optimized filesystem (e.g. omitting journals because the purpose of the storage is 99.99% read vs write), or even if they don't go that far and use a-blob-on-NTFS at least to cut down on fopen, fclose etc.

- provides a standard way for game developers to deal with the problem "how to package and distribute tons of tiny assets and compressing and decompressing them"

- saves context switches across the board, e.g. as mentioned by eliminating fopen and fclose calls, or by copying the file contents to the GPU entirely in kernel mode

Nevertheless, I'm not sure what the benefit of DirectStorage will actually be beyond copying assets to the GPU in kernel mode, as almost everyone these days uses one of the major engines, which have had all these problems dealt with for ages.


>I'm still confused by the existence of APIs like this.

They are locking down IO with trusted computing. There's been a 23+ year initiative to move to encrypted computing to take input/output control away from the user, and this required the cooperation of hardware manufacturers. Windows 10 and Windows 11 are the beginning of you not being able to run or play files or exes over the next 20 years, as YouTube, Netflix, and the game industry update their software to use the TPM.

This was from 2001:

https://www.theregister.com/2001/12/13/the_microsoft_secure_...

Here is a paper explaining what the future of files/broadcasts will be like:

https://web2.qatar.cmu.edu/cs/15349/dl/DRM-TC.pdf

Basically they are building a parallel mainframe inside our PCs that only YouTube, Netflix, the game industry and other software companies will control. They are removing ownership of our devices, and they needed Microsoft's help to do that.

We've seen Microsoft trial bricking cracked exes via update. Many UWP games only work on certain versions of Windows.

See here (ctrl-F, then select the UWP link):

https://old.reddit.com/r/CrackWatch/comments/p9ak4n/crack_wa...

They are bringing console lockdown to the PC; that is why Windows 10 had forced updates. That is also why Windows 11 was pushing hard for a forced internet connection for home users.


It is just as embarrassingly slow to load over the cloud. All it takes away is the local compute/GPU resource requirements.

Controls are also awful right now as you're stuck using the Xbox controller - other Xbox-compatible flight controllers, as well as the keyboard and mouse, aren't available yet in MSFS over xCloud. It's not (personally) enjoyable.


When I tried it a few days ago, the loading times to open the game were fairly bad. I couldn't actually see how long it would take to load a flight, because the UI was borked (or non-intuitive enough that I couldn't figure out how to select a departure airport).


I'm wondering if you can serve multiple clients with one fully loaded server that has a huge amount of RAM.


Is it multiplayer? Is it waiting for other users with slower connections/machines?


It has multiplayer features, but that's certainly not the case here. Switching off all the online features doesn't change the loading time at all. There's just a lot of potential to make it faster: when I start FS on my machine it takes 2 minutes or so, with only brief moments of CPU usage exceeding 2 of the available 24 cores, all while both GPU and IO are basically idle. I'm not sure what's going on. I wouldn't be surprised if there's a "GTA Online" type of laziness happening somewhere (see https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times...)


It feels like this was part of what Stadia promised, but never came to fruition.


Stadia's problem was never technical, it was entirely due to bad management. The fact that they got games to play reasonably well at reasonably high resolutions is an impressive achievement, and it set the foundation to build a dominant gaming platform appealing to everyone. Then Stadia's management figured out a way to emphasize all of Stadia's weaknesses, play down its advantages, and price it in such a way as to appeal to nobody.

Stadia's failure to make an impact in the market despite its technical achievements brings to mind the proverb: An army of sheep led by a lion is better than an army of lions led by a sheep.


Stadia actually worked pretty well the first time I tried it, and I played a few games with no issues, but the first time I tried Xbox Cloud Gaming, I got display artifacts and my computer locked up. The Windows Store apps, and especially the Xbox app, suck compared to Steam and have very little going for them.


When I first read this and a few replies, it made me think that Stadia had been shut down, but a quick Google search shows this is not the case. Am I missing something, and perhaps it's planned to be shut down? (Or is it just not as successful as was anticipated?)


I follow Stadia a bit, will try to answer. First understand, Google will never tell you the truth. Even if they were going to shut it down in 10 minutes, their last email to you would be to tell you it's not going anywhere.

Stadia shut down its own games division - the group who was working on in house games. Then, Stadia said it's going to focus mostly on b2b stuff - basically whitelabeling Stadia to others.

They've not said they'll shut it down, but the writing is on the wall.


>I follow Stadia a bit, will try to answer. First understand, Google will never tell you the truth. Even if they were going to shut it down in 10 minutes, their last email to you would be to tell you it's not going anywhere.

Oof this rings so true. I hear it SO OFTEN about Google, and I can't imagine that this kind of reputation doesn't hurt their ability to get folks to choose to build services/buy-in to their offerings. Are they just at a scale such that they genuinely don't care about concerns like this?


I honestly don't think it's purposeful dishonesty, just complete disconnection between groups and/or employees and management. I've always said it feels like Google is a company where the left hand doesn't know what the right is doing. So the Stadia team probably has zero indication their product is dead until the day of.


Apparently it has been “demoted” within Google. The rumour also says it's getting renamed to Google Stream and will be licensed out to other companies to build their own game streaming platforms. I don't think any of this has been confirmed, so take it with a pinch of salt.


Wasn't Crackdown 3 supposed to have this tech, but it got pulled at the last minute? There they were supposed to render the really complicated physics, like when a building or something similar collapses, in the cloud. I thought it was a great idea at the time.


Sort of; the article is talking about two different things. One is using the cloud for data storage (because the full Flight Simulator map is huge, streaming from the cloud is basically essential), the other is your standard Stadia-style "play the whole game via the cloud on any device".

The Crackdown 3 stuff never really got beyond demo stage iirc. In fact Microsoft promised all kinds of Xbox One games that would interact with Azure, but nothing really came of it: https://www.eurogamer.net/articles/digitalfoundry-2019-crack...

Comparing with games like Red Faction Guerrilla, you have to wonder if the cloud is even necessary for physics simulation.


Oh good, gives game devs a perfect excuse to start charging a recurring fee for what's an infrastructure detail on their side. Gotta get those sweet sweet subscription fees.


This is undoubtedly one of the primary motivations underpinning investment in this area, and makes me quite wary of anything coming out of it. Is any of this "cloud gaming" being backed by a love of gaming itself? Doesn't feel like it.


I mean, for Flight Simulator it's several hundred TB of data stored on Azure servers for all the map services - I do see the appeal, and it looks like MS is not charging past the initial fee. But this seems like a very fringe use case; I don't see a lot of other applicable ones.


I'm of the opinion that this is the future of gaming. Hard stuff will be done on cloud GPUs.


I'd be more interested in a mesh network where people can reliably share their compute when it's not being used, but still play their own games.



