I could see games being able to signal "there is no input during this cutscene, so it can be buffered at the client." But for everything besides cutscenes, game streaming essentially has to work like a VoIP call (hard-realtime), doesn't it?
It happens occasionally when I drive (bad LTE zones). On bad days (i.e., if I'm tethering at a hotel to join a conference call or using airport wifi), I sometimes hear "robot voice" as the tool attempts to deal with signal attenuation.
This may not work as well in video games, but I would like the option.
I imagine most music, and some during-gameplay dialogue, could be buffered at the client. I'd expect sound effects to be the only audio that needs to be super low latency. Maybe certain dialogue too, if the speaker is visible on screen.
This alone might alleviate a lot of the GP's problems, since most audio wouldn't cut out. That can make a huge perceptual difference.
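To make the split concrete, here's a minimal sketch of what such a client-side routing policy could look like (Python; the categories and the buffer/stream split are my own assumptions, not anything Stadia has announced):

    # Hypothetical client-side audio routing: buffer what tolerates delay,
    # hard-stream only what must stay in lockstep with gameplay.
    BUFFERABLE = {"music", "ambient", "offscreen_dialogue"}  # prefetch at client
    REALTIME = {"sfx", "onscreen_dialogue"}                  # hard-realtime path

    def route_audio(category: str) -> str:
        """Decide whether an audio clip can ride a client-side buffer."""
        if category in BUFFERABLE:
            return "buffer"  # survives short network hiccups without cutting out
        if category in REALTIME:
            return "stream"  # must stay in sync with on-screen action
        return "stream"      # unknown categories default to the safe path

    assert route_audio("music") == "buffer"
    assert route_audio("sfx") == "stream"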
They may already be doing something similar (halt rendering on congestion), but when this kind of lag happened to me, it felt like the rendering had continued without pause on their end.
In essence, you need a fat cable to a datacenter very nearby for shooters/fast action games.
Regular wi-fi is more or less sufficient for everything less demanding (you might get video degradation from time to time).
OnLive was the first and it was just insanely awesome. Too bad it was shuttered.
I also tried LiquidSky, and it was insanely awesome. At one point you ended up having a virtual PC where you could install and play any game from your Steam library. Too bad they’ve changed the service and monetisation a few times, and it’s unclear if they are still alive.
Haven’t tried others.
There are a lot of us out here who are curious how this thing works and what the design tradeoffs are.
In my experience the game would drop graphics before I would experience input lag. There were a handful of times that I did experience input lag. This was on a wired connection, 100 Mbps down / 20 Mbps up, through Xfinity.
I did notice in the Google demo that when the person was using the gamepad, he experienced input lag while trying to jump onto the steeple on top of the building.
I'm still of the opinion that Google shot themselves in the foot here by having a bunch of wireless controllers in one room. It's like they've never talked to a Super Smash Bros Brawl tournament organizer before: Wii Nunchuks over 2.4GHz Bluetooth have dropped packets / dropped input issues when you get to ~20+ participants.
Don't do mass wireless in one room. It always ends poorly. I'd expect that local wireless in a typical living-room setting would be a better experience actually. After all: the major issue is whether or not the wired-connection / fiber backbone of the typical city is up to spec for this kind of thing. (A typical living room user probably doesn't have to worry about clogged 2.4GHz connections unless they're in an apartment I guess...)
Lag is the least of your worries. Encoding artifacts will get to you and completely ruin the experience on anything that is not a simple puzzle game.
It was night and day, both in terms of latency and graphics quality. I don't have high hopes for this service.
I can't imagine trying to play something like a fighting game this way.
Anything more specific than that, e.g. lag occurring every 10 minutes on average?
> Your approximate location is determined using information from local Wi-Fi networks, and is collected by Location Services in a manner that doesn’t personally identify you.
The idea that one can retune the radio into a different channel for 100 milliseconds or so and not impact user experience is absurd.
As someone who plays through DOOM on Ultra Nightmare, this kills the game. There are times when, if I were to miss even 10 or 15 frames, there would be a good chance that would be the end of my run.
It will instead expand the audience to a lot of casuals who just want a little demon-slaying power-fantasy on a low difficulty. Nothing wrong with that, and it is a damn big addressable audience. Just like how mobile didn't "kill" any other platform, it just added a whole lot of candy crushers.
If this were an entire room of WiFi controllers hitting the same WiFi frequencies all at the same time, it could have been a local problem.
All in all, we can't read too much into the demo conditions. There are too many variables at play here.
No lag at all on my end. I am in the USA, so I have a solid but not amazing connection.
I should note that I have only played for half an hour because AssCreed was extremely boring. The tech itself was very solid in my experience.
I tried to catch some streaming artifacts, like parts of the screen not updating or a visible lag, but I could not see anything. In a blind test, I doubt I would have been able to differentiate this from running the game on my home console.
In any event I think that if you are a casual gamer and want to get off the graphics card upgrade train every couple of years, then this is a no brainer. It probably would not be a good match for multiplayer shooters, but for solo games, even AAA titles, it's a great option.
My feedback was that if they paired up with Steam they'd have a killer product. They might still have a killer product if they convince all those developers on Steam to also put their games on this service.
Now, as a technicality specific to ACO, the goals should still appear in the mission's description, no matter whether you hear the NPCs stating those goals or not.
Slow connection speed? Run these tests, etc. Or, switch to a faster, more secure browser. Sound familiar?
Google was just today fined 1.49 billion Euros for their ad strategy - this is reality: https://www.bbc.com/news/business-47639228
Indie shops would probably struggle even more under this model.
The cost per user will vary drastically by the amount they play and how computationally demanding it is. I just don’t see it being feasible without personalized cost. Looking forward to paying minimum cost for minimum graphics too.
The biggest concern is going to be internet availability and data caps. I tried the beta for Project Stream (now Stadia) this past winter. It worked well and was impressive, but I have a good internet connection with a 1TB cap. I have friends with much worse speeds and harsher caps and I am not sure if this would be viable for them.
I am more concerned about 'physical' gaming being phased out. I doubt Stadia will do this soon, but I like building a new computer every 4 years for playing games. Maybe it's something I won't actually miss (like how I don't miss CDs for music), but it remains to be seen.
Is the grind a way to slow progression in order to make micro-payments more desirable? Fuck. That.
I really enjoyed it for the first 20 hours or so.
Also, it doesn't have that high of a playerbase.
The trap that you should avoid falling into is assuming that's the only way games can thrive. F2P games are huge and dominate the conversation, but I think indie games are the best they have ever been. My favorite game last year was Into The Breach, and by all accounts it sold well in the same market that Fortnite dominated in.
Games are evolving in weird ways, but the evolution is multifaceted and diverse.
It is kind of another beast altogether for Google, Shadow, Microsoft (xCloud), etc. to dedicate actual hardware for your usage. Shadow and others like it are essentially remote VMs w/ GPU that you rent by the hour. For Stadia, xCloud, etc., we don't know what the business model is going to look like. The only positive here is that Microsoft, Google, and Amazon are all cloud infrastructure companies, so in theory they could price their offerings cheaper than a company that is a tenant on their systems.
Microsoft has made clear that they want Xbox to be a platform independent of the physical box they sell to people. They are 100% going in the direction of Stadia. The big question is how they are going to do it with regard to their next console.
That would be considered unfair competition and they would probably be fined by the EU, I guess.
The vast majority of your support department can be let go.
Time has shown that the best revenue model for these kinds of services is finding a monthly price, or multiple tiers of a monthly price, that adequately covers the costs incurred from heavy users and light users alike.
Most games would be developed for Playstation + Xbox + PC + cloud service to maximise audience (unless one cloud provider gains a monopoly, something nobody has so far managed to do in either game platforms or cloud computing).
Of course you can decide to develop only for the cloud platform of your choice, but that's nothing new. Microsoft and Sony already pay you good money to make your game exclusive to their platform (if you're lucky, even if you're indie).
Maybe I'm super negative here but I don't see any pros to this development. Today's games are already dumbed down and steered towards profits only. The only games I play these days are indie games made by 1 person up to a handful of people.
We really don't need more but we need better. (This applies to many other areas too)
Pay-per-hour would lead to shorter, more thoughtful games, rather than long grindy ones.
Similarly, a monthly subscription where you get access to a list of games actually helps indie games: you can jump into an indie game within seconds, instead of having to buy and install a small game you've never heard of before.
Pay-per-hour means developers have an incentive to build addictive games that keep you just engaged enough to keep playing for an extended time.
> Similarly, a monthly subscription where you get access to a list of games actually helps indie games: you can jump into an indie game within seconds, instead of having to buy and install a small game you've never heard of before.
Monthly subscriptions mean that the platform has to choose what games to include and promote based on what is most likely to make users find value in the platform—which means focussing on those with widest appeal, unless their recommender engine can get enough signal to reliably predict niche interest.
I'm not sure about that. Currently tons of people pay monthly for grindy MMO games. Expanding that sort of revenue scheme to single player games would encourage devs to create more Skinner boxes to keep players "engaged" over the long run.
Players aren't going to spend hours grinding if they know they're paying extra money for each of those hours.
Or it ties money directly to length, so you need your game to be 60+ hours to justify its existence. Or people won't pay that much, so we don't get any new Dark Souls or RDR2-length games.
Indie developers don't get a cut of console sales, but they do care about how many people buy/play their games, and if this widens the market it seems like it'd be a win for them to me.
That’s just speculation mind you. Things like Stardew Valley would probably do even better.
These huge companies will keep throwing money at streaming because they see gold at the end of the rainbow, but I’ve seen no indication that they have a plan without giant question marks before the “profit” step. Look at YouTube: Google acquired it in 2006, it is dominant in its space, and it still doesn’t make them money. Yet somehow games, which are more finicky and reliant on universally good internet connections, are going to work for them?
This is trend-chasing, once again without any real new ideas to overcome existing challenges.
It's apparently deeply tied to YouTube, particularly as an onramp for more (often ad-opportunity-generating) gaming content on YouTube, a service that is both an ad platform and the venue for at least two distinct revenue-generating premium services (the older of which is ad-free) that Google has not strangled in the crib.
If it runs for 4 years and makes money, that's a success for Google. It doesn't need to be permanent to be useful.
So it could result in a renaissance of the classic AAA 3D single-player games, perhaps.
Also, this will fail miserably if Google thinks it can continue with its current attitude toward user support, so there is hope.
Microsoft/Sony are NOT going to sit back and let Google win here. Expect replies from them (especially with Microsoft's Azure).
Streaming services for gaming are coming.
It's already failed multiple times. OnLive tried & failed. Nvidia's has been in beta for years. Sony has one.
Everyone in this area has tried this. Nobody has seen what could be described as "success", and the cost models so far have been ludicrous. Turns out renting Xeons and Radeon Instincts in a professionally staffed, maintained datacenter is way, way more expensive than a handful of consumer chips in a box in the living room with nobody on-call to monitor it.
The GPU here looks to be basically a slightly cut-down AMD MI25. That'd make a single GPU in this Stadia cloud gaming service cost more than 10 Xbox One Xs. How do you make that price-competitive here?
I'm sure they got bulk/promotional pricing from AMD, plus they're very good at both running hardware with low overhead and packing it efficiently.
You can't really pack the hardware here since it's latency sensitive. It's straight dedicated resources to an array of VMs. Dedicated CPU cache, even, hence the odd 9.5MB L2+L3 number.
Bulk pricing only gets you so far here. You're still talking gear that's categorically way more expensive than similar performance consumer parts. Not to mention all the other costs in play - data center, power, IT staff, etc...
Making this price-competitive is a big problem.
The other costs (power, people, etc.) are amortized over Google's array of services.
Last but not least, it would be very dumb of them not to run batch workloads on these machines when the gaming service is idle. I bet $1000 these puppies run under Borg.
Power doesn't really amortize, and neither does heat.
And capacity still had to increase for this. They didn't just find random GPUs under the table they forgot about, and now that they have a massive fleet of GPUs it's not suddenly going to start handling Dremel queries.
This all still costs money. A shitload of it. Someone is going to pay that bill. More ads in YouTube won't really fund gaming sessions. So will this be ad breaks in the game? No way that's cost-effective for the resources used. Straight-subscription model? This seems most likely, but how much and how will you get people to pay for it?
I know from experience that Google is very cheap. You tell Urs you saved a million dollars and he'll ask you why you didn't save two. Or five.
If this takes off, the pricing of the service will pay for the hardware (assuming they did a reasonable job there of baking it in). Even if it doesn't, organic growth from other, much larger Google services can make use of the idle hardware.
For the record, I was involved in a couple of projects that required a lot of new hardware. One of them even ended up saving the company a lot of money in a very lucky, definitely unintended way.
This strikes me as rather amusing. Google was having such trouble getting their hands on enough GPUs that they decided to build custom hardware accelerators (TPUs) to fill the gaps.
I'm sure they'll find a use for these.
HBM2 memory is super expensive. Like, rumor has it 16GB of HBM2 costs $320. Toss in anything custom here and there's zero chance this is under $600/GPU.
Even in the hotly contested consumer market the 16GB HBM2 Radeon VII is $700. And that doesn't have any high-speed interconnects to allow for sharing memory with the CPU or multi-GPU setups.
Could they be using these GPUs for other purposes during idle times (AI model training, cloud GPU instances)?
There's also an old paper by Murray Stokely and co. about the fake market that was created to make the most use of all hardware planetwide.
1. As subscription services take over, the upfront revenue game studios see will drop. This is just simple math: Xbox Game Pass costs $10/month, which means it's the total cost of a AAA game over 6 months. In a traditional model, many gamers would expect to buy, let's say, 2 AAA games per year. In this new model, I can play as many as I want. And even if I only play 1 or 2 every year, I'm almost definitely going to be "dabbling" in the collection for other games I may want to play, ESPECIALLY if they're instant-on like Stadia. Even if they pay out to studios based on some metric derived from time spent in game, there's no way studios will get the same level of income as they did before. (note: this is exactly why Spotify is having such a hard time, and why they're branching out beyond music. royalties abstracted behind a subscription service suck for the bottom line)
2. So upfront revenue drops. How do studios make that up? In-game transactions. They're already huge, and they'll just keep getting bigger.
3. So what, micro transactions (mtx) are the "new normal". Well, the top 5% of games can afford that decreased upfront revenue by making it up in mtx (think: Fortnite, Apex Legends, CoD). The trailing 95% can't (think: Indie titles).
4. Beyond that, you can bet your bottom dollar that Stadia will pay out tons to the AAA studios just to get their names on the platform, given that Google has no first party studios to speak of. Assassin's Creed gets enough upfront revenue to make it worth their while, meanwhile the next indie darling is left out to dry, further balkanizing the gaming industry.
5. Switching gears: A massive number of software engineers in the industry entered it because of gaming. Games, even back in the 90s, were such a clear application and value of computers that it was obvious, even to children, that they'd be something huge. It inspired a generation, to not only play, but to mod and even make their own. Now, we're moving that all off into the cloud, hidden from the next generation. Google wants you to own a Chromebook and consume their products, not understand how they work.
6. Speaking of modding: It's literally the source of the world's most popular games. Battle Royale? You can trace its roots back to mods for ARMA and Minecraft. MOBA? Dota, a mod for Warcraft. Creativity happens in environments that large corporations can't recreate, and traditionally a great platform has been starting with a base game, a great game, that some studio created, then exerting your creativity on top of that platform. It benefited everyone, including large AAA studios who could then copy your idea and make millions. Yeah, good luck modding on a blackboxed server a hundred miles away.
7. But fine. I guess we're moving into the future and this is part of it. Except, there are millions, even BILLIONS, of people around the world without the internet capability to even join this service. Google tried to help solve this with Fiber, and gave up. It's fucking hard. They'd rather do easy, cool things, like cloud gaming. Modern consoles are bad enough; my brother, who lives just an hour outside of a top 10 US city, recently told me that he downloaded Fortnite on Xbox for the kids. It took a week of 24/7 downloading. Most games drop with day 1 patches in the dozens of gigabytes, even if you buy the disk in stores. The sheer arrogance of Google, to get up on stage and claim this service is gaming for EVERYONE, the apex of accessibility, is disgusting to me. They're stuck so far up their own ass they've become the ouroboros.
8. Well, streaming games have taken over the world. Let's say you want to compete with Google on this streaming game front. All of the top three cloud providers now want to get into game streaming. So, no way can you compete on cost there with them; they own the data centers and give their game streaming divisions nice fat discounts. They all have the pockets to design nice custom silicon with AMD specialized for the task. And, oh by the way, all the latest games are now optimized for this silicon (whether it's the custom AMD chips in the PS4, or XB1, or whatever cloud streaming service we're talking about, they're all custom). You're stuck with off-the-shelf cards. Ha! Nvidia won't let you deploy their cards in a datacenter, because they ALSO want in on this big cash pile they've all convinced themselves exists. So, basically, good luck. The world has balkanized, and penetrating it becomes harder every year.
I hate this. I hate it so much. The only saving grace is that it is inevitable that this will fail to realize the results Google wants, and they'll pull the plug. And maybe the rest of the industry is smart enough to recognize how short-sighted a streaming-first/subscription-first strategy is, for literally everyone involved except the people who rent the metal.
I want to go back to the 2000s. This new world sucks.
Google Cloud has regional US DCs in western Iowa and central South Carolina. A midpoint between those two locations roughly lands on Nashville, TN, which is ~600 miles away from either. Light could make a roundtrip of that distance in about 6ms. Of course, the internet doesn't allow for latency at the speed of light, but that's the physical limit, and that's plenty; a typical internet browser alone has input lag of 10ms. In order to achieve 60fps, each frame has about 16.7ms to be rendered.
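For what it's worth, the math checks out; here's a quick back-of-the-envelope (Python, using the ~600-mile figure above and the usual ~2/3 c approximation for light in fiber):

    # Propagation-delay floor for the ~600-mile worst case described above.
    C_MILES_PER_SEC = 186_282   # speed of light in vacuum
    ONE_WAY_MILES = 600         # Nashville to either regional DC, roughly

    def round_trip_ms(one_way_miles, speed_fraction=1.0):
        return 2 * one_way_miles / (C_MILES_PER_SEC * speed_fraction) * 1000

    print(round_trip_ms(ONE_WAY_MILES))         # ~6.4 ms in vacuum
    print(round_trip_ms(ONE_WAY_MILES, 2 / 3))  # ~9.7 ms in fiber, before routing overhead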
But the regional DC is only the worst-case, because they've said they're deploying these things in 7500 locations around the world. That's unprecedented scale for a tier 1 cloud provider at the edge. They know that they have to be close to consumer populations.
Also consider this: Once cloud streaming takes off, we're going to see deeper integration into the frameworks and game engines themselves. Imagine a game engine which is built for streaming. It could do input prediction, doing a "light rendering pass" of frames for N possible inputs the input buffer might receive on the next frame, before it receives them. These custom chips they use have plenty of headroom to do this at 1080p, and most controllers have, what, 12 buttons + all of the joystick states? Depending on the game this might be possible (for example, it's hard to do in multiplayer). Combine that with the natural advantage a cloud-hosted multiplayer game would have in networking with other clients to resolve game state, and you can see that it's not just a strict downgrade; it might be possible that we'll see improvements in the performance of games beyond just the typical "new year better graphics" cycle.
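Here's a toy sketch of that speculation idea (Python; every name is invented for illustration, and nothing is known about how Stadia's engine actually works):

    # Hypothetical server-side input speculation. The stubs stand in for a
    # real simulation tick and renderer.
    CANDIDATE_INPUTS = ("none", "jump", "fire", "left", "right")

    def advance(state, action):
        # Stand-in for one tick of game simulation.
        return state + (action,)

    def render(state, quality):
        # Stand-in for the renderer; a real engine would emit a frame buffer.
        return f"{quality} frame of {state}"

    def speculate(state):
        # Cheap predictive pass for each input the client might send next tick.
        return {a: render(advance(state, a), "light") for a in CANDIDATE_INPUTS}

    def resolve(candidates, state, actual):
        # Right guess: ship the pre-rendered frame, hiding one network round trip.
        # Wrong guess: fall back to a full render of the true next state.
        return candidates.get(actual) or render(advance(state, actual), "full")

    frames = speculate(())
    print(resolve(frames, (), "jump"))  # hits the speculative "light" frame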
Reminds me a bit of the 30/60 fps fights a few years ago. Sure, 30fps games look more "cinematic" and 60fps movies look uncanny, but 30fps games feel less responsive.
Also, to point 7: guess what, billions of people can't afford a console or an expensive PC rig. Yet in developing countries data is already very cheap, if not free. So this DEFINITELY is a big step closer to unlocking games for them.
By comparison, shipping "edge devices" (aka, uh, COMPUTERS) running an 845 Snapdragon or Tegra (like the Switch) is cheap and getting cheaper. What makes more sense: asking someone in a developing country to pay $200 one time for a general purpose computer useful for everything including 1080p gaming, or $10/month in addition to, uh, guess what, some computing device they'd already have to own to access Stadia?
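Run the numbers on that comparison (Python, treating the comment's rough prices as assumptions):

    # Break-even point: $200 one-time device vs. a hypothetical $10/month fee.
    device_cost = 200
    monthly_fee = 10
    print(device_cost / monthly_fee)  # 20.0 -> streaming costs more after ~20 months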
The adage goes: never underestimate the bandwidth of a station wagon full of tapes barrelling down the highway. I mean, the follow up is usually "but never forget the latency", though in this case that doesn't apply. Point being: the internet isn't the answer to everything, but if the only tool Google has ever known is a hammer then every problem is going to look like a nail.
To that last point: it is impossible to overstate how fundamentally important Nvidia's Pascal architecture has been to the development of both gaming and AI. In my mind, it's the most important computing chipset introduced in the 2010s, and belongs among the "world's greatest" chips next to the Intel Core architecture, Apple's A-series, the Pentium, and the 8086. It put Nvidia, quite literally, 5 years ahead of the competition almost overnight; AMD is still catching up, three years later, to the perf-per-dollar and perf-per-watt of the GTX 1080.
That chipset, and the cards made with it (namely the GTX 1080/1080 Ti), was the first indication that DC rendering with a stream-to-client architecture was actually possible for video gaming. Before that it was hard to make an economic case for it.
OnLive was purchased by Sony, and it's easy to conclude that they repurposed their tech for their Playstation Now service. So it lives on.
Not sure how that would affect game developers though, as there won't be an attractive path into programming for young people; before, making your own game was quite an enticement to jump into programming and a motivation to study hard.
This is going to be an alternative for some types of games.
And quite on the contrary, I think it would be awesome for indies, since their games tend to be pretty short. And it's much easier for them to gain customers if they go viral through streaming, as many do, like horror games.
No, they didn't, and every single thing you've suggested about their gaming platform is also true about every other streaming platform, and yet none, literally none, of those platforms have tried any of the things you're complaining about.
You basically just made up a bunch of things to complain about because nothing in the actual release was objectionable. Yours is not a helpful way to react, my friend, though it is a popular one.
I’m open to debate about how this could be priced, but I’m pretty comfortable pointing to existing cloud computing business models or streaming services as a precedent.
I would invite you to come up with an alternative business model for serving people who like to play high end graphics games for many hours a day.
Why are you this comfortable? Netflix, YouTube TV, Twitch, Sling, Amazon Prime Video -- basically all streaming services offer flat rates, not per-MB ones.
Further, all existing game library services tout unlimited gaming as a primary selling point! That's the primary reason you opt into Gamefly or Nvidia Shield, at least according to their own marketing.
And this offering is not for high-end gamers. It takes the benefits high-end gamers get from investing in their hardware and makes them available to the millions of more casual gamers, so creating a business model for high-end gamers using Stadia makes no sense.
Finally, you're not thinking of this at the right layer if you're thinking in terms of things like s3, ec2, lambda, etc. This is the product that's built on top of those, and the single-price problem has been present for hundreds of years. It's a solved one; just ask any current MMO or, hell, any clothing manufacturer. You're basically saying that an XL t-shirt is going to cost the same as an S t-shirt, despite tens of thousands of examples to the contrary.
YouTube (and Google Play TV & Movies, which appears to carry the same for-sale/rent content in a different storefront) and Amazon Video also both offer purchase of individual content items as well as a common flat rate subscription to certain content.
I don't agree. Anything is scalable.
This does benefit from economies of scale, but it’s not something you can just solve with infrastructure and fixed cost. No matter how many computers you have, you’re still going to do multiple orders of magnitude more calculations to render high-end games than to stream a song. And you’re going to deal with difficult load balancing because every twelve-year-old gets home from school at the same time (exaggeration, but the point stands). GPU time costs money.
GPU time costs money, but it's a fixed cost; it doesn't matter what the GPU time is being used for, therefore it won't be priced per-game. The end.
For similar reasons, you’re not just going to rake in the money mining bitcoin because you bought a bunch of computers.
Or maybe to make the point even stupider, you could make a game about training neural networks same as you would on a real cloud service provider. If you can understand why google doesn’t charge a simple monthly flat fee for cloud computing of neural nets, you can understand why they can’t charge a simple monthly fee for computing neural nets in a game.
The entire reason this is happening is because Google has successfully captured what once was the holy grail of markets, education, and is doing things like this to keep people on chromebooks.
ChromeOS could be as important as Android. Especially with the end of Windows 7 extended support coming up, Google has less than 4 years to convince people to transfer to their platform instead of Windows 10. I know it seems heretical to imply that it will happen, but I think this is a good example that Google considers it not only a real possibility but something they might actually have a good chance of pulling off.
The most important thing to realize is that they are focusing on gamers as a trial, the people who are price sensitive and who have to move first off of Windows 7 since they can't pay for extended support even if they wanted it. They can then continue this into the workplace. This is a brilliant move by Google.
Google is not just trying to win in console gaming space here - it's trying to establish itself as the browser monopolist and it seems to be doing it without much subtlety.
Monopoly laws exist for a reason, born of hard experience, and I'm of the opinion that a single large player with complete domination, even one with technical superiority, probably should be broken up.
I think this can be read in two ways:
1. You're the only product, therefore you're the best (and the worst too, at the same time I guess). In which case I want to point out you can be a monopoly without controlling 100% of the market. There are almost always smaller competitors around— you just happen to have a share large enough or the assets necessary to control what happens ¯\_(ツ)_/¯. De Beers, for example, was considered a monopoly when it controlled 90% of the world's diamond production.
2. You're the one that came on top in a market with other competitors. Therefore, you must be the best.
This is assuming the only way to eliminate players off the market is by being "better" than them, but that is sadly not the case. For example— in Mexico there's a monopoly over the telecommunications business, and part of the reason it happened and stayed that way was due to support from the federal government and political corruption. Microsoft has had monopolies over several software markets— not because they were "better", but because if a better product came to be, they'd either buy it or build one that was built-in to Windows. IE was pretty bad, in many ways worse than FF, but also came built-in.
Democracy and a free press should in theory act as a check, but somehow that’s gone off the rails (or I’m just more aware of it than I used to be and it’s always been that way). Social media and the internet have changed the landscape. We have a sitting president screaming "fake news" at reporting backed by incontrovertible proof, often his own words from previous speeches and interviews.
The world has gone haywire, and at a time when we need more unity to address the issues facing us as a global society, the very bastions of that global society are getting beaten with a stick.
I wonder what the world is going to look like in 2050; I’ll be 70 if I’m still around.
I mean, this doesn't stop competitors, but it's quite the moat to cross.
Google uses open standards and business practices to appear like the good guys, but make no mistake they are abusing their market power.
I mean, when will we see ads for Stadia on the Google homepage? What happens when you search for Assassin's Creed in the future? Oh look, we have it right here at Google, no need to leave our ad platform.
I've got a school-age stepson; nothing there either.
It's only been in the last two or three years that the ecosystem around them has really matured enough that they can compete with Windows machines and iPads on anything other than price. With the way schools' budgets work, we've really only just passed the early adopter stage.
Most schools aren't buying fleets of Pixelbooks. They're buying chromebooks from companies like Acer and Asus which make devices that retail in the $200-400 range.
And right now from the outside it feels like there is escalating internal politics going on among the ChromeOS, Android, PWA, Flutter, Fuchsia, Kotlin, and Dart teams, with upper management giving free rein and letting the best win.
I really, really doubt that. I don't think you understand how ubiquitous Chromebooks are becoming in the education space. Last I heard, in the US, 60%+ of all school-provided computers are Chromebooks. School sysadmins love them because they're dirt cheap and can be provisioned quickly.
From Google's perspective it's great too. Between Google Classroom and the way Chromebook device management works, students have to have a Google account to be able to go to school. There are rules on what data they can collect, but still, kids are forced into the Google ecosystem at a young age.
ChromeOS doesn't need the pixelbook to survive, it provides an enormous amount of value on its own.
> And right now from the outside it feels like there is escalating internal politics going on among the ChromeOS, Android, PWA, Flutter, Fuchsia, Kotlin, and Dart teams
100% agree there. I've been hearing for the past 3 or 4 years that Android and ChromeOS were going to be merged, and nothing has come of it yet. It seems like even Google doesn't know what is going on there.
Being king of only the US school system isn't something that holds up long-term in a product roadmap.
Dude... I'm sorry but you are so wrong. Like I said originally, I literally just left this industry after working in it for years.
The US spends more money on education than pretty much any other nation, both per student and as a total dollar amount. The reason the numbers are so low worldwide is that Chromebooks in education are a relatively new concept. Everyone has been going after the big fish, which is the US.
Additionally, the way you need to handle student data in the US is fairly consistent across state lines, which means you don't need to customize your solution very much to be able to sell to all 60 million students. Once you go overseas, you'd need to sell across multiple country lines to be able to find a pool of students that big (unless you're targeting China, Russia, or India, which all have their own issues).
If you don't believe me, here's a blog post from a few years ago where Google literally says ChromeOS is here to stay and then focuses heavily on its benefits to education. https://blog.google/products/chrome/chrome-os-is-here-to-sta...
Even if you want to ignore all of that, I don't think you realize how much of a PR nightmare it would be if Google just stopped supporting ChromeOS right now. Schools have spent hundreds of thousands of dollars buying into this ecosystem. For schools that buy at the district level, it's in the millions. Most schools/districts don't have the budget to just replace all their computers overnight. Shutting down ChromeOS would pretty much fuck all digital learning in a lot of school districts for years to come.
PS. I'm pretty sure iOS's adoption numbers US & worldwide (not in schools, just total consumer adoption) match up pretty closely with Chromebooks, so there goes your idea that dominating only the US market isn't a viable business strategy.
Furthermore, the source didn't exactly do a great job with their research. This is a direct quote:
>Pixelbook is a Chromebook, meaning it runs on Google's Chrome OS software and is only capable of using internet-based applications.
>>The entire reason this is happening is because Google has successfully captured what once was the holy grail of markets, education, and is doing things like this to keep people on chromebooks.
Care to share how this tidbit is true?
>>The most important thing to realize is that they are focusing on gamers as a trial, the people who are price sensitive and who have to move first off of Windows 7 since they can't pay for extended support even if they wanted it. They can then continue this into the workplace. This is a brilliant move by Google.
What gamers are still on Win7? Most are already on 10. Most legit gamers care about input lag, which a streaming service will never solve vs. having hardware wired to your monitor. Most legit gamers use a wired mouse, for God's sake, because of input lag introduced by wireless mice.
Sorry to use your words against you, but I'm not so certain. I'd like to see real statistics from a few sources. I agree that Windows 7 is surely phasing out, but anecdotally (which is where most conjecture like yours and mine comes from), I know a number of people who specifically stayed on Windows 7 for two reasons: 1) compatibility with games; and 2) no telemetry (or at least it's considerably minimal compared to Windows 10).
I expect the number of Windows 10 players to increase as DX 12 becomes a common requirement, but we're not quite there yet.
I would say 60+% qualifies as most. On the other hand, ~25% is not negligible. So you both win?
Lots of filthy casuals just care that they can game. If they can, somehow, why shouldn't they buy the Chromebook? Hell, if that's what they had in school, they might have grown up to like precisely the genres of games that are easy to design around the lag.
> Most legit gamers care about input lag
The majority of people don't notice < 40ms round-trip lag, though "legit gamers" often do. At < 30ms round-trip lag, few people outside of hardcore FPS players will notice. Get to < 20ms and you're down to rounding-error levels. Don't just take my word for it (hobbyist game dev here); ask gamedevs and companies who have conducted testing.
40ms roundtrip lag for most of the world seems pretty achievable to me.
Pinging google.com [22.214.171.124] with 32 bytes of data:
Reply from 126.96.36.199: bytes=32 time=6ms TTL=50
Reply from 188.8.131.52: bytes=32 time=8ms TTL=50
Reply from 184.108.40.206: bytes=32 time=9ms TTL=50
Reply from 220.127.116.11: bytes=32 time=10ms TTL=50
Ping statistics for google.com:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 6ms, Maximum = 10ms, Average = 8ms
I'm on GDC public WiFi.
--- google.com ping statistics ---
12 packets transmitted, 12 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 74.750/75.566/76.724/0.587 ms
I'm on 1GBit Google Fiber in Orange County, which should be as ideal an experience as could be expected. Now, considering there would be additional time on top of the pure ping, I imagine there would be 80-90ms of latency for me. No way lol.
Pretty sure both of us have a Google datacenter nearby, but could the result of your ping be due to a bad configuration on your router?
I could totally make RTS and other real-time games work with that.
Unfortunately, any and all trust I had in Google maintaining its services for more than a few years is long gone. I have games from ~10+ years ago I can still download, update, and play via Steam; what are the chances this lasts more than 3 years?
In addition to that obvious concern, I'd also be very hesitant to let Google have any control over my gameplay experience. The downsides of gaming as a service just have no appeal to me, although I'm sure it would be functional for certain genres, for a certain type of gamer.
Regardless of how many streamers are playing Skyrim in their browser, I don't think I'll be joining them.
Mario Kart on the Wii still works. Online play is broken though, because Nintendo has no interest in running old servers to keep a service in a long-discontinued game running.
If any games are developed solely for Stadia, will they disappear if the platform disappears? Even if it doesn't, will they be playable in perpetuity, or will Google shrug breaking changes if it's for games that are over N years old, and have under K users?
Now if a shoe only works for a week, I'm not going to buy it. If it works for a year, sure, I'll take it. Similarly, for a digital good (phone, computer, software) we need to come up with a baseline for what an acceptable lifetime is. It can't be infinity, but it should be reasonable.
This all ties into ownership. If I own something I have control over it, so I can re-sell it if I'd ever like to re-coup some value or let someone else make use of it. I can lend it to a friend, and have it back to use or lend again. I can give it as a gift to a friend or family member. I can keep it indefinitely as a memory. All of these are reasons why some people still have a NES, GameCube, etc. It's probably not that they've been playing on the console for 30 years. They either kept the NES they owned as kids, or bought one second-hand for the nostalgia, or got it as a gift from someone who knows how much they like classic games, etc.
There are still tournaments for games on old consoles. Old games are perfectly good games; they don't have the same wow factor, but they are as fun today as they have always been. Will future "classic games" be lost to history, or available only when a publisher decides to monetize the nostalgia of the public with a re-release?
The better analogy is: when you buy a book, do you expect you can read it again in ten years? People who bought first pressings of Beatles albums 60 years ago can still play them with their grandkids. I can still play the old SNES with my siblings at Christmas. But imagine a game bought in 2019 being unplayable in 2029. It's reasonable to be concerned.
I saw the OnLive demo. It was very slick, and the concept seemed more than viable; it seemed inevitable.
> the company had deployed thousands of servers that were sitting unused, and only ever had 1,600 concurrent users of the service worldwide
They were burning through all of their money because they highly overestimated the audience. That's one of the main problems the cloud was made to solve.
Traditional GPGPU customers, and machine learners, usually use CUDA instead of OpenCL / VulkanCompute / DirectCompute. At least from my position it looks that way. I’m a freelance developer and my clients pick CUDA in ~75% of cases. The rest is DirectCompute, if computing on client PCs and hardware compatibility is required.
Furthermore, there are other technological advances that have come even in the past 2 years that help facilitate this, like Secure Reliable Transport protocol (https://www.srtalliance.org/).
Things were just too stacked against OnLive at the time to make it work, but most of those barriers have been removed since they went under.
I suspect that Google will have better infrastructure than OnLive, and that their customers will have better Internet.
It's not the edge cache itself that matters; it's that many ISPs peer directly with Google.
Google Cloud can route the audio/video/keyboard packets mostly over Google's private network and then only use the public internet once it gets to your ISP (or their transit provider). This provides Google with more control over how the packet gets to the end user.
Google provides a similar service to Google Cloud customers as the "Premium Network Service Tier".
It's the challenge of edge computing datacenter design.
However, a gaming session means having a dedicated process running for you somewhere. I doubt this can be deployed anywhere on the spot, but perhaps they have some amazing technology for that.
CDNs aren't helpful if every stream is unique.
Unfortunately for me at least, what I want is more akin to a cloud VM where they can host my games and I stream them wherever, and this isn't it. I've already got games split between Steam, uPlay, Origin, etc. I have zero interest in buying a game twice just to be able to have more portable play.
It'll probably be a mix of pay-once games, lots-of-games-for-a-subscription, and pay-once, play-until-Google-shuts-the-service-down games.
Not only will you need a really good internet connection and wifi, but you'll also need to be close to a Stadia data center. The best-case scenario will probably be a latency of around 50 ms, but I'm guessing the average will be closer to 100 ms, which is too much for many types of games (especially the competitive games streamers play).
Heck, even streaming in your local network isn't such a great experience these days.
I love the idea, but realistically we are still very far away from streaming games replacing consoles or PCs. Many companies have tried (Nvidia, Sony, etc) but no one has succeeded for the simple reason that latency is not there yet.
The only time I found it lagging was similar to the times when both of the above lag: during particularly complex visual scenes (i.e. you're circle-strafing around a target and the entire screen is constantly redrawing). I thought it was great for playing a game casually, i.e. story mode. Lots of people use that phrase as a put-down, but the system is well suited for a game like ACO where you are mostly being tactical, planning, exploring, and moving the story forward.
I think of a game like the 2016 Hitman: I hesitated to install 30GB of it on my PS4, but if you told me I could drop into the demo/prelude in less than a minute, even at 720p, it's a very appealing concept for somebody like me who plays video games the way other people watch Netflix while they're eating dinner: basically whenever I have some downtime and want to dip into a story or mechanic I like for ~30 minutes.
Note that even Steam Link over Ethernet is too laggy to play KB/M FPS at any serious level - the input lag makes FPS almost unplayable.
I can see the growth of services like these, especially with more gamers being unable to access dedicated hardware, but there will always be a niche for dedicated hardware. The only way I think they could solve that problem with streaming is some sort of hybrid approach where some of the UI is rendered remotely and some locally, with some kind of client-side simulation for input.
I don't doubt that consoles will remain useful, but I think that services like this will satisfy a pretty legit niche for a vast swath of games that aren't really dependent on low-latency input (i.e. puzzle / turn-based / RPG / simulators / board game conversions) and that are often 'discovered' by people finding letsplays on YouTube.
For some variety of MMORPGs, I can imagine devs being excited about the reduced surface area for cheating/exploits. For somebody like me who uses a Mac, I'm looking forward to playing a version of Civ that doesn't cause my laptop to sound like it's about ready to take flight. I don't think the idea is meant to replace consoles, though; more, it seems like a way to grease the wheels of commerce and get people playing (and buying) games that they've been traditionally priced out of because of the not-insignificant startup cost of building and maintaining a gaming PC/console.
Every game has an extremely high number of potential combinations and outcomes, so it's effectively uncacheable. Fan that out to Steam level popularity and diversity of games and it sounds a bit nuts.
The latency is good in my experience. There are occasional resolution drops, but that happens less than once per hour. Mind you, my experience was based on playing ACO, which probably is not sensitive to latency.
I also played Nvidia Shield Now on Mac while it was in beta. I would say Project Stream is a much, much better experience (no need to sign up for a new account is a plus; no client required is a HUGE plus).
I work for Google, opinions are my own.
I agree but I don't consider this service to be for hardcore enthusiasts. The enthusiasts will continue to buy their powerful gaming machines because they really care about having the best experience.
I think this is great for someone like me, who likes video games but doesn't want to spend that much money on a computer. Before, I would not be able to play the latest titles because my computer is 6 years old, but this would make it so I could.
There are probably tons of kids out there whose parents would scoff at the idea of giving them a $1000+ computer but now would be able to play the latest games with this technology.
All that assuming it works, of course, which is no guarantee.
> There are probably tons of kids out there whose parents would scoff at the idea of giving them a $1000+ computer but now would be able to play the latest games with this technology.
Isn't a console affordable enough in that case?
It's a niche market overall, and each is a different segment you can target independently.
Nothing can deliver the ~25ms input latency that local gaming can, but when you can't game locally it'll do.
And games already have latency from input devices, inputs registering, the game calculating the next frame, the monitor displaying it, and the user perceiving what has happened on screen. Every millisecond of network delay gets sandwiched between all of those.
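As a rough illustration of that sandwich (Python; every figure is an assumption, not a measurement):

    # Streaming's codec and network delay stack on top of the latency games
    # already have locally. Illustrative numbers only.
    local_ms = {
        "input device": 5.0,
        "simulate + render one frame @ 60fps": 16.7,
        "display": 10.0,
    }
    streaming_extra_ms = {
        "encode": 5.0,
        "network round trip": 30.0,
        "decode": 5.0,
    }
    local_total = sum(local_ms.values())
    print(local_total)                                     # ~31.7 ms locally
    print(local_total + sum(streaming_extra_ms.values()))  # ~71.7 ms streamed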