There's the obvious DRM win. Accessibility will be higher with way lower or non-existent hardware costs. These probably look very enticing on a corporate slide deck. Not to mention the subscription model.
On the more technical side, game streaming is only really desirable for games that don't run well on commodity hardware (think AAA 3D action titles). Latency is very important for these titles, and the general rule of thumb is that 50ms of input latency is readily noticeable to humans. Average home connections have ~20-30ms round-trip latencies, and those numbers are neither universal nor consistent. Combined with hardware input, world simulation, and rendering latencies, it can be difficult to get the total low enough to be comfortable.
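To make that concrete, here's a back-of-the-envelope budget (every number below is an illustrative assumption, not a measurement):

    # Rough end-to-end latency budget for one streamed frame, in ms.
    # All values are illustrative assumptions.
    budget = {
        "controller/input sampling":   8,   # USB polling + OS
        "uplink (input -> server)":    12,  # half of a ~25 ms RTT
        "world simulation":            16,  # one 60 Hz tick
        "rendering":                   16,  # one 60 Hz frame
        "video encode":                5,
        "downlink (video -> client)":  13,  # other half of RTT
        "video decode":                5,
        "display scanout":             8,   # ~half a 60 Hz refresh on average
    }

    total = sum(budget.values())
    print(f"total: {total} ms")  # ~83 ms, already past the 50 ms rule of thumb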
Modern games just have the client run its own world simulation and reconcile with the server later to hide the network latency. This strategy won't be possible with streaming. Is there some alternate optimization that can be made for streaming? The architecture will definitely be simpler with the mainframe paradigm. Maybe if there's only one client it could be feasible to send some batteries-included chunk of frames that can be cheaply assembled based on the next set of inputs. Is there any hint that progress can be made in this direction?
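For contrast, here's roughly the local-prediction-plus-reconciliation trick that streaming gives up; the names and message shapes are hypothetical:

    # Minimal client-side prediction with server reconciliation.
    pending = []          # inputs sent to the server but not yet acknowledged
    seq = 0
    player_x = 0.0

    def apply_input(x, inp):
        return x + inp["dx"]  # the deterministic movement step

    def on_local_input(dx, send):
        global seq, player_x
        seq += 1
        inp = {"seq": seq, "dx": dx}
        pending.append(inp)
        player_x = apply_input(player_x, inp)  # predict now, don't wait an RTT
        send(inp)

    def on_server_state(ack_seq, server_x):
        global player_x, pending
        # Drop acknowledged inputs, then replay the rest on top of the
        # authoritative state to reconcile prediction with the server.
        pending = [i for i in pending if i["seq"] > ack_seq]
        x = server_x
        for inp in pending:
            x = apply_input(x, inp)
        player_x = x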
If the latency problem can't be solved, I'm definitely bearish on game streaming. Latency insensitive games for the most part are not difficult to run on commodity hardware.
"Testing has found that overall "input lag" (from controller input to display response) times of approximately 200 ms are distracting to the user. It also appears that (excluding the monitor/television display lag) 133 ms is an average response time and the most sensitive games (fighting games, first person shooters and rhythm games) achieve response times of 67 ms (excluding display lag)."
These numbers are from the PS3/Xbox 360 era, which certainly sold massively well. Since display lag is not included in that 67ms, this is what these streaming services would have to aim for. It doesn't sound completely insane given the ~30ms round-trip latencies of home connections.
On a more subjective note, I think it's going to be really hard to compete with high-end gaming PCs. On the other hand, lots of games don't run particularly smoothly on the consoles these services are directly aiming at (especially the base PS4 and Xbox One, rather than the PS4 Pro or Xbox One X). I can perfectly imagine a Stadia version of The Witcher 3 running circles around the PS4 version, which lags so much... But I don't think it would hold up against the PC version.
But I don't game much at all anymore, so my knowledge is mostly outdated. The last games I played were 80 Days and A Dark Room on my iPhone (both pretty good games). So I guess I'm now more in the target demographic of Apple Arcade.
Personally I noticed a significant improvement switching to a 165hz monitor in basically every first person game. The only difference here is going from 16ms to 6ms frame granularity (10ms off the worst case). I’m not sure how much is visual smoothness or latency, but I’d gladly take refresh rate/fps over UHD.
I’m definitely in the enthusiast group though. I didn’t even realize that console players dealt with such high latency. It will be very interesting to see how the market plays out.
I am not totally convinced that a high refresh rate monitor allows human vision to respond faster. I would rather suspect that most game loops are VSync-locked and a higher refresh rate leads to better input sampling and processing.
Maybe I should set up a simple test where I show a non-interactive sequence at various framerates and ask users to identify the highest framerate. And then repeat with interactive camera controls.
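Something like this for the non-interactive half, assuming pygame (a crude two-alternative forced choice):

    import random
    import pygame

    # Show a moving square at two framerates in random order and ask
    # which clip looked smoother.
    pygame.init()
    screen = pygame.display.set_mode((640, 360))
    clock = pygame.time.Clock()

    def show_clip(fps, seconds=3):
        x = 0
        for _ in range(int(fps * seconds)):
            pygame.event.pump()
            screen.fill((0, 0, 0))
            pygame.draw.rect(screen, (255, 255, 255), (x % 640, 160, 40, 40))
            pygame.display.flip()
            x += 300 / fps          # constant on-screen speed regardless of fps
            clock.tick(fps)

    rates = random.sample([30, 60], 2)
    for fps in rates:
        show_clip(fps)
    answer = input(f"Which clip was smoother, 1 or 2? (order was {rates}) ")
    print("correct" if rates[int(answer) - 1] == max(rates) else "incorrect")
    pygame.quit()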
Have you ever watched a high-level FPS player? They can aim and shoot in the span of about 1/5 of a second (200ms), occasionally even faster. A big part of that is increased input processing, but a lot of it is muscle memory at that skill level. At 25hz, a frame takes 40ms; at 165hz, 6ms. With two equally skilled players, one will see the other up to a whole 34ms sooner, in a process that takes about 200ms total. That's going to show up as a highly statistically significant advantage. Of course, the increased sense of connectedness to the game at higher rates is another big advantage, but these effects combine. Pit one skilled shooter against another at equal refresh rates, but with one dealing with 34ms of additional input latency, and they're not going to play as well as they should.
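The frame-time arithmetic behind those numbers:

    # Worst-case extra delay before new information appears on screen is
    # one full refresh interval.
    for hz in (60, 144, 165):
        print(f"{hz:3d} Hz -> frame time {1000 / hz:.1f} ms")
    # 60 Hz -> 16.7 ms, 144 Hz -> 6.9 ms, 165 Hz -> 6.1 ms

    advantage = 1000 / 25 - 1000 / 165   # the 25 Hz vs 165 Hz case above
    print(f"up to {advantage:.0f} ms of a ~200 ms reaction")  # ~34 ms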
Competitive players explicitly disable vsync, because the higher the game's framerate, the faster it processes input. You also see updated information on screen sooner mid-refresh, but that doesn't help as much as a wholly higher refresh rate.
All of this only matters if cloud gamers were playing against locally-rendered players in the same match though.
This is not reliable at all. You can really feel 100ms latency.
That's why in the world of pro music, barely anything above 10ms keypress-to-sound latency is deemed acceptable.
Quake III will make all these streamed games feel slow.
Quake III makes all modern games feel slow. But that hasn't had the slightest effect on the typical gamer's mindset. If only certain types of games work with cloud gaming, then AAA publishers will only make those types of games. Exactly like the case with EA and microtransactions.
I can see this working very well for games like Total War, where hardware specs are super high but response time is not critical.
However, with these streaming systems I’m not sure developers will be able to use tricks like that. It seems like these tech stacks are running all code remotely. I feel pretty skeptical about whether they can make the game feel good enough this way.
However that's not most of the market. The appeal of cloud gaming is for single player non-competitive games. These are the most pirated games and the games where input lag matters the least.
Sure you won't get high end PC gamers, that's not really the target demographic here. It's the millions of console and mobile gamers.
Moreover, I was able to play a AAA title on my 2011 MacBook Air, and that was fantastic.
This is actually already done in many AAA titles. The idea is to basically always be showing a prediction of what will have happened in ~70 ms, and update your state model accordingly. Given a datacenter to run the games on and all of the announcements about being able to take a snapshot of the state, this doesn't sound super far-fetched.
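At its simplest that's dead reckoning; a toy sketch (the lookahead constant and names are made up):

    # Dead reckoning: extrapolate an entity ~70 ms into the future from its
    # last known position and velocity, so the stream shows a prediction of
    # "now" rather than the state as of one round trip ago.
    LOOKAHEAD_S = 0.070

    def predict(pos, vel, accel=(0.0, 0.0)):
        t = LOOKAHEAD_S
        return tuple(p + v * t + 0.5 * a * t * t
                     for p, v, a in zip(pos, vel, accel))

    print(predict(pos=(10.0, 5.0), vel=(3.0, 0.0)))  # (10.21, 5.0)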
On a streamed game, all interactions need a round trip before the user sees any response.
Not to mention that the game will have to be written specifically to allow the simulation to be rolled back and forth. For example, a misprediction triggers a sword swing it turns out you didn't make, trapping the player in that animation for a second. You end up giving the player a horrible experience where the game seemingly has a mind of its own. Or you can roll back and interpolate from the bad state back to a good one, which would be ideal. The middle ground is to roll back and have a cut in the action back from the misprediction. Then you realise this is happening every second or two.
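The bookkeeping that implies looks roughly like this (hypothetical structure; real rollback netcode is much hairier):

    # Keep a ring buffer of recent state snapshots so a misprediction can be
    # rolled back and re-simulated with the corrected inputs.
    from collections import deque

    HISTORY_TICKS = 20              # ~333 ms of history at 60 Hz
    history = deque(maxlen=HISTORY_TICKS)

    def save_snapshot(tick, state):
        history.append((tick, state.copy()))

    def rollback_and_resim(mispredicted_tick, corrected_inputs, step):
        # Find the last snapshot at or before the misprediction...
        for tick, state in reversed(history):
            if tick <= mispredicted_tick:
                # ...then re-run the simulation forward with fixed inputs.
                for inputs in corrected_inputs:
                    state = step(state, inputs)
                return state
        raise RuntimeError("misprediction older than stored history")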
It's hugely dependent on particular network conditions. I wonder if SpaceX Starlink might have an option to optimize for latency?
This is why I'm suggesting you try it. Because to be more blunt, you do not know what you are talking about.
It's a different story on a pc with a decent gaming monitor though.
If you have a 100ms latency to a stadia server, then there's a 200ms latency between pushing forward on the stick and your character moving forward. This is not the case on a networked game, the client would start moving your player immediately (even if there is a 100ms delay to send those updates to the server).
It's also easy to port edge-rendered games to many platforms. It makes sense for MS to ally with Sony and shift their effort to cloud before both the Xbox and PlayStation are little more than rich web clients.
CDNs are deploying compute/lambda at the edge capabilities, so "gaming at the edge" could be seen as a reasonable expansion of that.
There will have to be local rendering for nearby objects and particles, but this is feasible.
DRM isn't totally airtight, though: REnouveau does exist, and basic reverse-engineering tools could be created to document the most common game engine functions.
Twitch shooters will never be replaced by this though.
However, much lower latency to these services is definitely viable in a lot of places, as long as they put enough servers spread out across the country.
In the USA at least, there are going to be a lot of areas where the available internet options just aren't good enough, though.
I think it's actually more or less the opposite?
Twitch platformers like Celeste and Cuphead can't afford input lag, as they require very quick and precise movements from players. It so happens that these types of games almost always have low system requirements. Rhythm games and fighting games tend to be similarly lightweight. Street Fighter 5 won't run on a potato, but the hardware required is far from state-of-the-art.
By contrast, Assassin's Creed Odyssey and Red Dead Redemption 2 already contain large amounts of input lag when played locally according to Digital Foundry, and consumers don't seem to care much. This isn't to say that adding more latency is a good idea—and I have no idea if the existing latency could be whittled down to compensate for streaming—but it clearly doesn't bother people much.
I'm working on a system that unifies single-player and multi-player development. Think of it like zeit.co's "NodeJs Now" but for games, with a local-only configuration option. The way my system relates to streaming is that the client is scaled way down in complexity. Basically, the sync protocol is a stream of positional updates from the server. The client becomes little more than a screen-entity display. Then the developer has the option to add a bit of "own world simulation" to further hide latency. (Or, in the case of the game I've implemented, just "own entity simulation" is enough.)
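To give a flavour, the client side is roughly this (hypothetical wire format):

    # The server streams positional updates; the client just renders
    # whatever entities it was told about.
    import json

    entities = {}  # id -> latest known position

    def on_server_message(raw):
        msg = json.loads(raw)          # e.g. {"id": 7, "x": 3.2, "y": 1.0}
        entities[msg["id"]] = (msg["x"], msg["y"])

    def render(draw_sprite):
        for eid, (x, y) in entities.items():
            draw_sprite(eid, x, y)     # client is little more than a display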
If you look at things like the Steam hardware survey, it quickly becomes apparent that the hardcore gamer (e.g. someone who has a high-refresh-rate display or similar and cares about this) is likely a very small part of the overall gaming landscape, which is a very broad church now.
I think game streaming will ultimately win out hugely - “ordinary” (for lack of better term) gamers simply don’t care about the technical disadvantages. Huge numbers of console gamers already often add 100ms or more lag via their TV’s slow image processing and don’t even notice or care. I don’t think the average gamer is as latency sensitive as many readers here might be.
Most people are going to see that they can have a high end console/PC-like experience for a fraction of the cost of owning a high end console/PC, for the vast majority that is surely an appealing prospect. Finally, John Carmack himself believes the latency issue can be mitigated well enough for most things - that’s good enough for me ;)
My only real concern in all this is the future of video game mods. They're a pretty big thing now, and it's hard to imagine them surviving well in a streaming-based future. That would be a loss, creatively and functionally, for some people.
At least in the UK a typical home broadband latency is 6 to 9ms in my experience.
How do they do it? By defining high speed as dial-up speeds in the ordinance they helped the local small, technologically inept government craft!
God Bless The USA
> Latency insensitive games for the most part are not difficult to run on commodity hardware.
The majority of people don't have a PC with a dedicated GPU, or a PC at all, thanks to the smartphone.
That said, I don't like the current trend, and I hope they fail at streaming video games.
Problem is, smartphones are still shit for any precise input (lol touchscreen), so what's the point of trying to run intensive games on them via streaming?
When OnLive was operational I had no trouble playing Batman (something something Arkham?) on their American servers from a MacBook Air in Sweden. I had a similar experience later with LiquidSky which was running regular games downloaded from Steam in their cloud and streaming video to my laptop.
If they do cooperate on creating a similar service, this will definitely be a net win for the gamers.
However, if that's what happens, and if the service is successful, we might end up seeing fewer games tailored to run on the console's hardware with low latency, and we'll end up having cloud-optimised games (much like we have mobile-optimised games now). But this is pure speculation at this point.
Still, I have a hard time imagining first-person games working, since I despise playing those with significant input lag. Even on a local PC, some games have a horrible delay unless you run very high frame rates (quite a few Unreal Engine titles), and I always have a hard time getting into those.
Now, with an additional 20-30ms of network latency... for this to be close to acceptable, they would have to get the on-server delay close to nothing at the very least. I doubt that will happen while games are still released on multiple platforms.
Your first point is key though. Piracy is a big deal. People can say until they're blue in the face that it "helps", or that "they wouldn't have bought the game anyway", and for some unknown percentage it's true, and for some other unknown percentage it's a straight lie.
If they go from "optionally play a game via streaming", to "this game MUST be played via streaming" eventually, they get the second group (the ones who WOULD pay if they couldn't get it for free) instantly.
Only if it doesn't suck, obviously.
People with sensitive hearing can tell the difference if a sound arrives <1ms too late and the sound system needs adjustment.
All my bets are on bubble on this one. Maybe it could be resurrected as a zombie to offer sandboxed streaming for all those crappy mobile apps.
And when you do, you will find out that there isn’t any actual issue. I’ve played games like The Witcher 3 via streaming over three years ago.
Input/output lag simply isn’t an issue, and I’m your average German VDSL user, nothing special.
From my point of view, there is no sense in any further discussion - the empirical data is already in.
It's like saying "Switching from a car to a bike just isn't an issue - I once rode all the way to my neighbour's house!"
Try playing a multiplayer first person shooter on the same service, you'll find latency is an issue.
By all accounts, the Doom demo that Google gave was plagued with lag.
The bigger limiting factor I think will be bandwidth, especially in America with the awful internet provider situation.
This would only be relevant if you had a direct fibre link in a straight line from your house to the service. It's packet switching/routing etc. that introduce latency. And even if you DID have direct fibre, the worst-case scenario for distance (a server on the other side of the world, with no other latency accounted for) is about 200ms, even half of which would be noticeable in many games.
I'm not saying it's impossible to do game streaming, but you can't just divide the distance by the speed of light and say it's low. That's not really the issue.
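For reference, the physical floor of that worst case:

    # Best-case fibre latency to the antipode: light in glass travels at
    # roughly 2/3 c, i.e. ~200,000 km/s.
    distance_km = 20_000            # half of Earth's ~40,000 km circumference
    speed_km_s = 200_000            # light in fibre, ~0.66c
    one_way_ms = distance_km / speed_km_s * 1000
    print(f"one way: {one_way_ms:.0f} ms, round trip: {2 * one_way_ms:.0f} ms")
    # one way: 100 ms, round trip: 200 ms, before any switching or queuing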
Only when network is underutilized, and there is no throttling/bufferbloat.
In practice, it’s a lot worse than 200ms. 500-700ms are typical for servers on the other side of the planet.
Average Pings from Digital Ocean regions:
As most things go, YMMV.
The problem with latency is that it adds up: lag from the server, lag from the TV, lag from my brain, lag to my fingers, lag from the controller to the machine, lag back to the server, lag in processing, and then it all starts again.
Client-Server lag might be acceptable in isolation but not as part of a system.
So not really anyone. And that's in an extremely latency friendly environment where the servers are just up the road.
I now play games with some friends, and while I'm using a local PC, they use Shadow PCs (i.e. a VM with a GPU in a datacentre vaguely near them) and it's fine.
Maybe it'd be worse on an LTE/5G connection, but my phone is getting 25ms to the nearest Speedtest.net instance right now, which doesn't seem substantially terrible.
I would be much more concerned about the pricing of all of these things, than the technical issues. How many gaming subscription services am I going to need to add to my video subscription services?
From a network-theory perspective, higher bandwidth, a lower error rate on the transmission path, and closer geographical proximity to servers all seem to help keep latency down.
However, if you argue the problem is solved by your empirical evidence from playing The Witcher 3, of all games, I disagree with your point. Play a real-time strategy game, a MOBA, or a first-person shooter. ~70ms of lag (one report for Stadia cited 70-130ms; may be stale data) is noticeable to people. On the other hand, I don't mind streaming a turn-based game.
Certain games like rhythm games would also be pretty miserable with jitter...
Back in the day, if you could maintain the data, my 15 physical 3.5-inch rather unfloppy floppies were Monkey Island. I lost my personal copy, but if I had kept moving the data forward, I could have my own copy of Monkey Island right here. I personally failed at my own maintenance there.
Then we had (skipping some steps) something like private WoW servers. They worked 90% of the way. But Blizzard kept some code, and that part never worked privately. Most stuff worked on the client side; most stuff on the server side could be reverse engineered.
And now we're planning to stream games via something similar to TeamViewer/Citrix, put bluntly. At this point, what difference is there to an interactive movie? I can replay several single-player games I acquired on Steam offline as long as I have the binary and a Windows instance. I'm sure I can play FTL or Slay the Spire in 2040 without any developer interference.
If you stream those games, those games will die harder than the multitude of multiplayer only games with dead servers once the streaming servers get shutdown due to business decisions. Which can and will happen < 1 year after a failed launch in my book.
I don’t want to give that much control over to someone else. I don’t need all-you-can-eat content. Just let me pay for the music/movie/book/game that I want and enjoy it for what it is, and know I always can just use my copy without needing to be plugged into the internet.
I guess this view probably makes me look like an old man yelling at a cloud, but I’ve managed to find a way to survive without signing up for Netflix, and I hope I can still play games in the future without needing to sign up for a streaming service.
At this point, I trust my backup 2x SSD mirror more than some cloud. So I guess we're two old guys yelling at clouds.
And this will get even more fucked once Article 13 in the EU gets weaponized in 2 years.
What worries me the most about this trend is what impact it will have on the computers we use.
Today you can buy a powerful laptop or desktop. You can write software on it, for it, for any purpose at any time. You can distribute that software to others freely, and you can permit others to run your software for any purpose under permissive terms of your liking that still protect the rights you have over your intellectual property, the software that you wrote.
I might be falling into the "slippery slope" fallacy here, but the way I see it, once the majority of new games, movies, music and other entertainment is all accessed via streaming, and all of the big players in software move the applications that are used in business and in education to the cloud, the computers we use may easily:
1) Become too weak to do intensive work locally, because everyone is using them as thin clients only, for interacting with software that is run in the cloud.
2) Become so locked down that we lose the ability to distribute software outside of cloud delivery mechanisms controlled by corporate entities.
I am not anti-corporate. But we cannot allow corporations to be in control of the software that we use in this way. Not because they are corporations or because they are capitalists or anything like that. The same would be true of any entity, whether that entity be a business, a state, a body of the government, a non-profit organization or even just a person.
When we allow someone else to gatekeep access to the platforms we use, we are handing our freedoms over to them.
I use an iPhone that is running iOS. I can afford to allow my phone to be this restricted because I have other computing devices that allow me to develop software for them on them, and to run software written by others on the operating systems that those other devices are running.
But if none of my devices allowed me to do this, and no devices on the market allowed me to do it either, then where would that leave me?
And where would it leave us, humanity?
In my opinion, the ability to develop software freely, to distribute that software to others freely, and for them to be able to run that software freely, is so essential to our society and to the future, that if we lose this then we need to take drastic measures to regain the control that we lost as quickly as possible. We would need to crowdfund the development of computers that allow us to do what these future locked down computers don’t. And we would need to ensure that these freedom-friendly computers become used by so many people that we could continue to develop these machines and be able to continue to produce them, at a reasonable price. So that the ability to develop and run any software for any purpose will always be possible.
>At this point, what difference is there to an interactive movie?
What is an interactive movie and how does having a game stream from a remote server change a video game into one of these things?
Now, I could replay those games after some time. That improves my overall cost-benefit ratio from 60 bucks -> 2 hours of gameplay to 60 bucks -> 4 hours of gameplay. Or, if the game is good, even more. I have a bunch of adventure games I paid 10 bucks for, and I have replayed them a lot and I will replay them a lot.
However, if you have a AAA studio, they will consider a game a failure shortly after release. You get one play-through for $60 and that's all you get, because then they shut down the servers for their next thing. And of course they will "charge less" and "run updates" and "keep expansions going" and such, so it's less obvious. I'm jaded at this point.
Or someone has modded it, and you can play it again with different content, or altered content, or altered gameplay, or some combination thereof. And if it's owned and local, there's nothing they can do to really stop someone from doing that definitively. The first mods for games weren't because developers decided they wanted people to be able to mod the games, they were from fans diving in, reverse engineering and making changes. Developers embracing modding came later.
With a cloud based game or digitally verified synced content, some level of consumer control is definitely lost, and that's a shame.
Also, games like GTA V have been supported for a crazy amount of time and transitioned into online play. People have probably gotten thousands of hours of gameplay from a $60 purchase. I play things like Fallout and Minecraft, so I also get a huge gameplay-time-to-cost ratio, and I'm not very sensitive to this from my personal experience.
The same thing happened with movies, with some people wanting to hoard boxes and boxes of DVDs, VHS tapes or Blu-rays, thinking that they are somewhat smarter than people who just pay for a subscription (or digital rental) to stream whatever movies they want to watch, when they want, on the device they want to watch on.
They will once it becomes normalized. DRM for software has long been far more odious than seen in the movie industry.
Unfortunately, the general public wants convenience over these benefits. Instant gratification.
I wish I was kidding, but it's right there in the press release minimaxir linked
And with their GPUs going into datacenters and a move away from CUDA lock-in to OpenCL (example: https://www.openwall.com/lists/announce/2019/05/14/1 ), they're in a great position to increase their market share of GPU-accelerated tasks like AI and deep learning.
I agree about Stadia in general. I wouldn't want it to become too dominant, especially since it's like DRM on steroids - you can't back up anything from there. In general, I don't think regular gaming stores that sell rather than rent games are going anywhere. I.e. GOG, Steam and others are going to stay around. As long as Stadia won't be pushing exclusives, things will be good, since some will always prefer to run games locally.
Since I assume Sony & Microsoft know this, and they aren't countering with a VR push but instead trying to match them with streaming, I suspect they judge the market either isn't there yet, or may never be there (on any business-realistic time scale). Or they just can't resist the DRM and lockin.
All one needs is a screen with support (e.g. smart TV, Chromecast, Steam box), internet, and controllers. Classic game systems might be needed for another 5-10 years. Controllers & form factors matter (e.g. Switch, VR), but the "power" of PCs & game systems is no longer a differentiator.
Not really sure the Sony and MS thing is a reaction, and I'm not sure Google's thing is so much a thing.
Some of Google's PR about how whatever it is they're doing is going to be all things (their statements have been strange IMO) could just be unfocused PR, or just an unfocused product.
(Technically, it is also available on Nintendo Switch, but there's also an arguably better alternative there.)
If MS and Sony wanted, they could have Vulkan driver for Xbox and PlayStation tomorrow, since they are all using AMD GPUs. And no one stopped Apple from supporting Vulkan on their systems either. They don't, since they are also lock-in jerks, which is exactly the point I was making above.
That is not really an accurate statement. Graphics performance is heavily dependent on the actual hardware design of the underlying platform. You can't wave a magic API-wand and have your executable or shader be magically fast on every hardware platform.
That is really an imagined scenario. An extremely tiny minority of games are targeted at such a wide variety of platforms. Real-time high-performance code is always tightly coupled with the hardware: if you want to squeeze performance out of hardware, you want to minimize abstractions to the level where they don't hurt your productivity. If you're targeting consoles, you can maybe target the PC platform too. But there is no way you're targeting the Nintendo DS or a smartphone with the same codebase without major modifications to the graphics code.
On the contrary, most games are targeted for most platforms, to increase reach and sales as result of it. Only some exclusives of console makers are not doing that, and those are clearly outliers.
> Real-time high-performance code is always tightly coupled with the hardware
That's not common at all. Unless you want to beat modern compilers with assembly code, you will only produce something worse. Sure, there are rare cases where using hardware-specific features yields extra performance; codec developers do that, for example. But for games? Not usually needed. Shaders are provided in a cross-platform fashion (such as SPIR-V) and compiled into GPU machine code by the driver. And actual game code is commonly written in a high-level systems language (C++, etc.) plus a scripting one.
Good performance is achieved by parallelizing the engine properly, since modern hardware is increasingly multicore.
Microsoft and Sony want to be relevant in a mobile world too...that's what this is all about. They both were shutout of the mobile piece of the pie in gaming, and they are coming back for it. Microsoft and Sony want their gaming ecosystem on mobile devices with the ability to cross-play with wired devices powered by their SaaS models, Xbox Live and Playstation Online. That market is much bigger than the market they've already captured.
(Things like Wine are effectively sabotage.)
Valve seemed to not make the final marketing push on Steam Machines, that I could see. Maybe because they realized that it wasn't coming together. And/or maybe it was intended as a warning, to not be forced out when MS was grabbing the app store.
Your idea of Valve in making a console themselves is interesting. They did get some limited experience with hardware, with the controller and the thin device. I don't know what all would go into some kind of manufacturing and branding partnership.
One thing that I think didn't happen is an initial loss-leader on the console, to bring people into the lock-in ecosystem, like the console companies might do. I don't see how the third parties had that incentive. And I don't know how loss-leader works if whatever anyone builds is just a commodity PC.
> The inside story of how DOOM came to life on Stadia. id Software delivers a bird’s eye view of real-world Stadia development from conception to execution. Learn how high-performance games are made on Google’s new streaming platform. Recorded at GDC 2019.
So this agreement mostly focuses on Sony using Azure infrastructure for cloud gaming. I wonder how Xbox will compete in this scenario; it's kind of weird hosting your competitor.
Microsoft using Macs (not even discreetly).
Apple using AWS, Google Cloud, Azure
Amazon Selling competing products
This may be a bit of a specialized case, but if you run a cloud hosting provider that is not your primary (or original) business, you will need to accept that you will likely be involved in hosting a competing service. (I am sure AWS hosts a fair number of retailers and related technology companies.)
Also Sony and Microsoft have a shared interest in having viable competition in the market against Stadia in case it takes off.
Same as Apple using Samsung's tech in one way or another. Money talks after all. Different divisions, different profits, etc.
I get 200 Mbps download and half that upload. I've played plenty of PS Now titles, and the only problem I have with it is the input lag. There's no escaping it. You cannot play fast-paced games with such delay, especially multiplayer FPS games; it's just terrible. It's better to have the hardware to play than to play remotely. Unless you don't mind 1-3 seconds of input lag (plus more because you're going over the internet), then game streaming is just for you.
Throw in ping times of 50-100 ms to some datacenter, not to mention whatever traffic-shaping shenanigans Comcast will inevitably apply, and I don't have a warm fuzzy feeling about the viability of this rash of streaming services.
Compression latency should be in low milliseconds (1 ms should be plenty.) The latency over ethernet is going to be measured in hundreds of microseconds. Sending a compressed 100 kB frame over it takes 800 microseconds, which can partially overlap with compression. Noticing 2-3 milliseconds extra latency is going to be pretty hard.
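The arithmetic, spelled out:

    # 100 kB frame over gigabit ethernet: serialization delay only.
    frame_bits = 100 * 1000 * 8      # 800,000 bits
    link_bps = 1_000_000_000         # 1 Gbps
    print(frame_bits / link_bps * 1e6, "microseconds")  # 800.0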
Something is badly wrong with your setup or with Steam Link.
I do always use it with an Xbox controller rather than mouse+keyboard though, so it's less latency sensitive.
PlayStation Now (formerly Gaikai) was a startup that wanted to sell game ads (try before you buy), and after expanding into actual game streaming it struggled to make money and attract users. It is just too expensive.
To me, the biggest difference between movie streaming and game streaming is that games are much more resource-hungry. Video can be watched on a cheap 30-buck USB stick; for gaming you need a $400 console.
As a Linux user, cloud gaming excites me even though I have no intentions of ever using it.
Removing the end user capital expense of a gaming rig sounds like a huge change. Would drastically reduce the cost of VR setups as well.
The trick is to render the pictures using the latest position of the headset. Include depth maps to make the rendering more accurate. Use ML to in-paint the gaps.
Also for a moving scene, you'd need to send a depth or position buffer and a velocity buffer for each frame, meaning you'd need something like twice the bandwidth for your video. Probably more, since I can't imagine how you would compress that information: any artifacts are going to give weird results in the reprojected image.
John Carmack has some good blog posts about this if you're interested in the details.
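A toy version of the reprojection step, assuming a pinhole camera and numpy (heavily simplified; real implementations have to deal with occlusion and fill the disocclusion holes mentioned above):

    import numpy as np

    # Unproject each pixel of the streamed frame with its depth and the pose
    # it was rendered at, then reproject with the headset's latest pose.
    def reproject(color, depth, K, old_pose, new_pose):
        h, w = depth.shape
        ys, xs = np.mgrid[0:h, 0:w]
        pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3)
        # Unproject to camera space, then to world space with the old pose.
        cam = (np.linalg.inv(K) @ pix.T) * depth.reshape(1, -1)
        world = old_pose @ np.vstack([cam, np.ones((1, cam.shape[1]))])
        # Project into the new camera.
        cam2 = np.linalg.inv(new_pose) @ world
        uv = (K @ cam2[:3]) / cam2[2]
        out = np.zeros_like(color)
        u = np.clip(uv[0].round().astype(int), 0, w - 1)
        v = np.clip(uv[1].round().astype(int), 0, h - 1)
        out[v, u] = color.reshape(-1, color.shape[-1])
        return out  # gaps left by disocclusion would need in-painting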
For example, consider that the PS3 Other OS mess might've been more a Japanese "we can no longer have this, and must discontinue it, for the good of the company/platform" (not that I agree that was OK), than "now is the time to screw over those other guys, on things we never intended to honor in the first place" (which has been well-known SOP of some other companies).
Two historic foes setting aside their differences to target their common enemy - The gamer.
That's what I'm thinking about right now.
Especially with how ubiquitous Chromebooks are, and how the original beta for Stadia was given to players through Chrome, it's pretty much a certainty.
Since Microsoft and Sony have a more vested interest in keeping their traditional consoles afloat, you might have less luck with them.
Glad that Nvidia is pivoting the service back to a full computer setting. The Shield, while cool, really could have done with a better screen.
I wish nVidia would bring it back, TBH.
PS Now accounts for about half of all game subscription service revenue, so half of 273 million (quarterly).
Doing the math, that leaves you with somewhere in the realm of 2.37 million active subscribers (273 million × 0.52 / $60).
The PS4 installed base is about 86 million consoles. That means only about 2 or 3 percent are active PS Now subscribers.
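Spelled out (the $60 is an assumed quarterly spend per subscriber, i.e. ~$20/month):

    quarterly_revenue = 273e6 * 0.52   # PS Now's ~half of subscription revenue
    assumed_spend_per_quarter = 60     # assumption: ~$20/month per subscriber
    subs = quarterly_revenue / assumed_spend_per_quarter
    installed_base = 86e6
    print(f"{subs / 1e6:.2f}M subs, {subs / installed_base:.1%} of PS4 base")
    # ~2.37M subscribers, ~2.8% of the installed base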
If Stadia comes out with a massive Steam or PS Store type of catalog where you can access any game, I think it'll be a runaway success.
If it's just PS Now (along with a similarly small catalog) without the need to spend as much on hardware, I don't see it being anything more than a ~2-5 million user subscription service.
It's a nice pile of revenue, but it's not "overtake the rest of the market as the preferred way to play games" type of numbers.
> The tech giant believes that game developers will no longer be limited to computing and will be able to create games with "nearly unlimited resources". 
So the paradigm shift is that as a developer, your game simply consumes the amount of cloud resources that it requires to run, and nothing more. i.e. Stardew Valley is cheaper to run ("publish") than Red Dead Redemption.
So perhaps the idea is that the game publisher has to pay for the compute time and price the game accordingly.
It's not just the people with $1k phones: it's all the students whose parents got them a MacBook Air for college, and all the kids and teenagers whose parents won't buy them a console for Christmas. Not to mention markets where gaming hardware is several times more expensive than in the US or Europe.
edit: if the experience is decent
Besides, it's not the "real" experience.