Everyone copied Apple when they removed the headphone jack; who's to say consoles in the traditional sense will still be around in 20 years?

Microsoft and Sony are NOT going to sit back and let Google win here. Expect responses from both (especially from Microsoft, given Azure).

Streaming services for gaming are coming.




> Streaming services for gaming are coming.

It's already failed multiple times. OnLive tried and failed. Nvidia's GeForce Now has been in beta for years. Sony has one too.

Everyone in this area has tried this. Nobody has seen what could be described as "success", and the cost models so far have been ludicrous. It turns out renting Xeons and Radeon Instincts in a professionally staffed, maintained datacenter is way, way more expensive than a handful of consumer chips in a box in the living room with nobody on call to monitor it.

The GPU here looks to be basically a slightly cut-down AMD MI25. That'd make a single GPU in this Stadia cloud gaming service cost more than 10 Xbox One Xs. How do you make that price-competitive here?
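
Back-of-the-envelope, treating every number below as an assumption (neither AMD's bulk pricing nor Google's deal is public), the claim works out roughly like this:

    # Rough hardware-cost comparison; the datacenter GPU price is assumed.
    XBOX_ONE_X_MSRP = 499     # USD, launch price of an Xbox One X
    MI25_CLASS_GPU = 5_000    # USD, assumed; actual pricing unknown

    consoles_per_gpu = MI25_CLASS_GPU / XBOX_ONE_X_MSRP
    print(f"one datacenter GPU ~= {consoles_per_gpu:.0f} consoles")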


A big difference would be that OnLive had space in 5 colo datacenters in the US. Google has 19 full datacenters around the world and is building more. Plus, Google has its own very large fiber network reaching POPs and ISPs around the world. The fiber backbone gives them lower and more predictable latency, compared to going over multiple upstream ISPs with different connections, issues, etc.


Since not everybody is playing at the same time, a single GPU can service multiple players (each of whom would otherwise require their own console).
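
A minimal sketch of that multiplexing argument, with made-up numbers:

    # If only a fraction of subscribers play at the worst moment,
    # the fleet can be far smaller than one GPU per subscriber.
    subscribers = 1_000_000
    peak_concurrency = 0.10   # assume 10% online at peak
    gpus_needed = int(subscribers * peak_concurrency)
    print(f"{gpus_needed} GPUs instead of {subscribers} consoles")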


On the other hand, everyone on the east coast (therefore using east coast edge nodes) will be playing from 8-11pm EST when the new Wolfenstein game comes out, so how does the capacity get rationed? Do you make people queue until a node close enough to them is available? Do you sell the spare GPUs to GCP customers for compute during off-peak times to make up the cash? Do you make it $40 per month?


I think this comment is super underrated. If America is asleep, you can't really use that capacity for players in Europe, since the latency would increase. Likewise if Europe is over capacity, you can't really just assign players to a US server.


And (while I realize you're oversimplifying for the sake of example), it's not just per-continent in this case, but something more akin to per-metro-area.


Google can keep the GPU count near the player count, and when there's no big demand the GPUs can be used for other types of computation.


Yeah, if you know where to look, they left clues about using MI25 hardware. (I haven't been an employee for years; this all unfolded afterwards, and, ironically, it's just one search away.)

I'm sure they got bulk/promotional pricing from AMD, plus they're very good at both running hardware with low overhead and packing it efficiently.


> plus they're very good at both running hardware with low overhead and packing it efficiently.

You can't really pack the hardware here since it's latency-sensitive. It's straight dedicated resources for an array of VMs. Even the CPU cache is dedicated, hence the odd 9.5MB L2+L3 number.

Bulk pricing only gets you so far here. You're still talking gear that's categorically way more expensive than similar-performance consumer parts. Not to mention all the other costs in play: data center, power, IT staff, etc.

Making this price-competitive is a big problem.


You can't do time slicing, no, but you can definitely reduce time to first frame in many ways. If you don't do that, you need to provision even more hardware. Packing is also part of the capacity-planning phase of a service.

The other costs (power, people, etc.) are amortized over Google's array of services.

Last but not least, it would be very dumb of them not to run batch workloads on these machines when the gaming service is idle. I bet $1000 these puppies run under Borg.


> The other costs (power, people, etc.) are amortized over Google's array of services.

Power doesn't really amortize, and neither does heat.

And capacity still had to increase for this. They didn't just find random GPUs under the table that they forgot about, and now that they have a massive fleet of GPUs, it's not suddenly going to start handling Dremel queries.

This all still costs money. A shitload of it. Someone is going to pay that bill. More ads on YouTube won't really fund gaming sessions. So will this be ad breaks in the game? No way that's cost-effective for the resources used. A straight subscription model? That seems most likely, but how much will it cost, and how will you get people to pay for it?
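
For a sense of scale, here's a rough per-player break-even sketch; every figure is assumed, not sourced:

    # Amortized monthly cost of serving one subscriber (all assumed).
    gpu_node_cost = 5_000        # USD per GPU-class node
    amort_months = 36            # 3-year depreciation
    overhead_per_month = 30      # USD/month: power, cooling, network
    players_per_node = 3         # time-shared across time zones

    monthly = (gpu_node_cost / amort_months + overhead_per_month) / players_per_node
    print(f"~${monthly:.0f}/month per player just to break even")

Under those assumptions you're already in the $50+/month range before content licensing or profit, which is why the pricing question matters so much.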


Maybe it wasn't AMD, but they already had a massive fleet of GPUs. It wasn't running Dremel, either. Or maybe they found a way to do that, too, I don't know, but there are already enough workloads at Google to keep GPUs well fed.

I know from experience that Google is very cheap. You tell Urs you saved a million dollars and he'll ask you why you didn't save two. Or five.

If this takes off, the pricing of the service will pay for the hardware (assuming they did a reasonable job of baking the hardware cost into the pricing). Even if it doesn't, organic growth from other, much larger Google services can make use of the idle hardware.

For the record, I was involved in a couple of projects that required a lot of new hardware. One of them even ended up saving the company a lot of money in a very lucky, definitely unintended way.


> They didn't just find random GPUs under the table that they forgot about, and now that they have a massive fleet of GPUs, it's not suddenly going to start handling Dremel queries.

This strikes me as rather amusing. Google was having such trouble getting their hands on enough GPUs that they decided to build custom hardware accelerators (TPUs) to fill the gaps.

I'm sure they'll find a use for these.


Or to look at it the other way, it's a Vega 64 with double the RAM, so Google probably pays $600 or less. Google doesn't pay enterprisey gouge pricing.


It'd be a Vega 56 basis, not Vega 64, but the problem is that "double the RAM" part.

HBM2 memory is super expensive. Like, rumor has it 16GB of HBM2 costs around $320. Toss in anything custom here and there's zero chance this comes in under $600/GPU.

Even in the hotly contested consumer market, the 16GB HBM2 Radeon VII is $700. And that doesn't have any high-speed interconnects to allow sharing memory with the CPU or across multiple GPUs.
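
Stacking up the rumored numbers from this thread (all rumors, not a confirmed bill of materials):

    # If memory alone is ~$320, the rest of the card has to fit
    # into whatever is left of even a consumer price tag.
    hbm2_16gb = 320            # USD, rumored HBM2 cost
    radeon_vii_retail = 700    # USD, 16GB HBM2 consumer card
    non_memory_budget = radeon_vii_retail - hbm2_16gb
    print(f"${non_memory_budget} left for die, interposer, board, margin")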


> How do you make that price-competitive here?

Could they be using these GPUs for other purposes during idle times (AI model training, cloud GPU instances)?


They have TPUs for their AI stuff, and you still have to dedicate these resources while gaming sessions are active. How much monetary value can they really get out of the idle capacity to offset the active usage?


You underestimate how much Google tries to squeeze out of all the machines in its fleet. That includes old ones, sometimes to comical effect. A colleague at my current job told me about the utilization targets at Amazon, where he used to work. At Google you could choose to be that wasteful if you really wanted to, but you'd lose headcount. Be more efficient and you'd get more engineers. I.e., you decide whether you'd rather have machines or people.

There's also an old paper by Murray Stokely and co. about the fake internal market that was created to make the most of all hardware planetwide.


Not really; the Samsung Galaxy S10 still has a headphone jack. And Samsung, being the largest Android device manufacturer, makes for a big counterpoint.


Sony launched PlayStation Now 5 years ago. This is not a particularly new idea.


Microsoft already has Project xCloud in the works.

https://blogs.microsoft.com/blog/2018/10/08/project-xcloud-g...



