Doing this on-premises is also pretty tantalizing - I watched a recent Linus Tech Tips video where he built a 64-core Threadripper and used virtual machines to replace four physical computers in his home, including two players gaming simultaneously.
So basically a mainframe? I can't imagine it's economically viable though; a 64-core Threadripper costs more than eight Ryzen 3700Xs and clocks lower.
I personally think it's a super awesome setup, but it's _definitely not_ for the faint of heart. You have to _really enjoy_ debugging crazy shit for it to be worth it.
But even at my "budget" $2400 range, a $20/month game streaming service covering 4 users would buy about _ten years_ of service for the same price.
The economics of a streaming service are really quite killer.
//Edit: Using a Ryzen 5 2600 and a Gigabyte X470 Aorus Ultra Gaming
- Follow the instructions in this video for getting a dump of your vbios: https://www.youtube.com/watch?v=mM7ntkiUoPk (you can stop once you've gotten the vbios)
- Make sure your Unraid is updated to at least 6.7
- Read the "New vfio-bind method" section of: https://forums.unraid.net/topic/80001-unraid-os-version-67-a...
- Use the knowledge gained from that to add the IOMMU group containing your GT710 to /boot/config/vfio-pci.cfg (see the sketch after this list)
- Reboot your Unraid server
- Do the normal gpu passthrough thing for the GT710 for a VM, but add the dumped vbios to the "Graphics ROM BIOS:" field in the VM "edit" gui
Hopefully it should work :D
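For reference, the file is just a BIND line listing every device in the group. A minimal sketch, assuming the GT710's video and audio functions sit at 0000:0a:00.0 and 0000:0a:00.1 (substitute the addresses from your own IOMMU listing):

    BIND=0000:0a:00.0 0000:0a:00.1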
One thing to keep in mind about the vfio-pci.cfg file is that it's effectively a blacklist, and if you do something that could change your IOMMU group assignments (such as adding or removing a PCI device) you could end up inadvertently blacklisting a PCI device you don't intend to. All you need to do is update the IOMMU groups in vfio-pci.cfg to fix it, but it can freak you out if you're not expecting it.
(For example if I remove one of my GPUs, one of my SATA controllers will inevitably end up getting the IOMMU group that _used_ to belong to a GPU, so it'll get blacklisted, and two of my array drives will appear missing until I update the vfio-pci.cfg to match the new IOMMU groups)
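If you want to check where things stand after a hardware change, this loop dumps every group and its devices:

    # list each IOMMU group and the PCI devices assigned to it
    for g in /sys/kernel/iommu_groups/*; do
      echo "IOMMU group ${g##*/}:"
      for d in "$g"/devices/*; do
        lspci -nns "${d##*/}"
      done
    done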
You can pick up a used i5-3xxx or 4xxx system with 8GB of RAM real cheap. If you get one that has at least a 300W power supply, you can then put in a GTX 1660 or similar and do quite well with most games: max settings @ 1080p for some older games, and still playable at decent quality for the newer ones. And all for $300 USD or less.
I just put an RX 580 with 8GB of video RAM into my old i5-3570S system. The card was just $135 at Microcenter, because it was open-box.
Even then, you'll need 27 months (2 years and 3 months) before it's more cost effective than GeForce Now. And that assumes you already have a desktop with a powerful enough CPU.
Personally, I started doing cloud gaming mostly because I was on a netbook at school (I liked the form factor, and the battery life was quite nice versus most laptops). At first on OnLive, then on LiquidSky, and for the past year on GeForce Now.
I don't believe the service will stay at $5/month though; both services I used shut down because they weren't viable, and they were in the $10 range. LiquidSky was supposed to come back, but they've missed multiple release dates now, and with the release of GeForce Now I have no hope. Shadow seems more viable at $25/month, which makes your solution much more cost effective.
Well, he also built a 6-PC watercooling loop, so yeah, not all of it is particularly applicable to the average viewer.
When I saw his LAN gaming center I thought it would be cool if he'd retry that project there. :->
With the kind of workshop they're building with Alex, it's clear they're aware of that. Let's just hope they can do more of them, faster!
I don't think the core speed would matter much - Steam reports 37% of gamers are using 3GHz-and-below processors, so they may be mediocre but still competent:
Personally though, I think I'd opt for building four or five separate machines and managing them as a cluster.
In some sort of software engineering commune?
I've also found it fun to live with people with varying interests; living with just other software engineers might be boring. Common hobbies are great though. Imagine every night being a LAN party.
Half of the SWEs I work with don’t game.
* Can run two OS at one time, each with half the resources
* Can run one with full resources if needed
* Can have multiple linux and windows installs
* Can have snapshots of installs
* Takes around 5-10 seconds to swap one of the running installs for a different install (see the sketch after this list)
* Can run headless VMs on a core or two while doing all the above, e.g. a test runner or similar service if needed
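For what it's worth, a sketch of that swap assuming a libvirt-based setup (the domain names are made up; any hypervisor with a CLI works similarly):

    # ACPI-shutdown one guest, then boot another against the freed GPU
    virsh shutdown win10-gaming
    virsh start linux-dev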
I use a 49" ultrawide with PBP, with one GPU connected to each side, so the booted installs automatically appear side by side, plus Synergy to make mouse movement between them seamless, etc.
It took a little work to set up, but I've worked this way for ~3 years now and, apart from upgrades, never had to think about the setup after the initial time investment. Highly recommend it.
I can definitely see the advantage for a small team of having a single large machine with multiple GPUs: everyone sits at a thin workspace and "checks out" whatever install they want to use, with however much CPU power and RAM they need. They can clone and duplicate installs, get a copy with their personal files in home ready to go, and check out larger slices of the machine when they have more CPU/GPU-intensive tasks. That's probably my ideal office machine, after using my solo workstation for a while.
The only real hurdle hardware-wise is your IOMMU groups and your CPU compatibility; if you have a moderately modern system it shouldn't be a problem.
I also have a couple of inexpensive PCIe USB cards that I pass through to the guests for direct USB access, highly recommended.
Most guides will use qcow2 images or pass through a block device. As I mentioned, I have a giant LVM pool, so I just create an LV for each VM, pass the volume through to the VM as a disk, and let the VM handle it. On the host you can use kpartx to map and mount the partitions if you ever need to open them up.
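A sketch of that workflow; the volume group name and sizes here are placeholders:

    # carve one LV per VM out of the pool, handed to the guest as a raw disk
    lvcreate -L 120G -n win10 vmpool

    # on the host, map and mount the guest's partitions when needed
    kpartx -av /dev/vmpool/win10      # creates /dev/mapper/vmpool-win10p1, ...
    mount /dev/mapper/vmpool-win10p1 /mnt

    # and tear it down again when done
    umount /mnt
    kpartx -dv /dev/vmpool/win10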
This is like the Arch Linux of computer hardware. You don't install Arch Linux to get shit done. You do it because you like building and configuring, and to learn about Linux.
I'm on Mac at the moment and frequently pine for Arch. 'What on Earth just happened...', 'What did that...', 'What is this...', 'Well that wouldn't have happened on Arch.'
People claim Mac 'just works' because of out-of-the-box readiness, well I'd say in contrast Arch 'still works' or 'keeps working': it does what you tell it and if you don't tell it to change, it stays working in the same way.
I just wish I could pay a company that would have engineers and designers building an actual user-friendly, stable OS on top of Linux. Mac OS, with all its faults, is still miles ahead in terms of usability and polish.
Not if you "borrowed them from work for testing"...
And that's the core problem, isn't it? Aren't games notorious for being mostly sequential workloads?
As 4-core/8-thread machines become basically the minimum you can assume most PC gamers have, we'll probably see devs making more use of multithreading everywhere.
I see what you did there.
If you go for a 12 or 16 core Threadripper like the 2920X or 2950X you're only looking at $425/$690, which is $106/$173 per "PC".
Combine that with the savings you get from not having to buy 4 cases, PSUs and motherboards and I think a multi-head threadripper setup will end up significantly cheaper than buying 4 machines.
I tried a round of an FPS with it. It wasn't that bad, honestly. My ping to the GeForce server was about 12ms, and the ping from GeForce to the game server was < 2ms. That's actually lower latency than I'm used to with an Xbox.
It did feel a bit laggy, though, despite the low latency. But I'd try it again. For me the benefit is in resource-intensive single player games.
Either way, for $5 a month, it's a great deal!
I then bought a dummy HDMI plug, and installed some open source software called Moonlight on my Mac and Linux systems.
I can stream my games locally no problem from my VM. Eventually I want to get wireguard set up, and test performance over VPN, but I really haven’t had a reason to try it just yet.
If I can figure that out, I may be in love.
The direction we're going in is massive parallel computing, not massive chips. Threadripper represents the pinnacle of individual chips, but I'd imagine having 4 boxes that are each 3800X + 1 GPU + 128GB RAM would be cheaper and easier to maintain, both hardware- and software-wise? Except it wouldn't sound as fun.
I wonder if using Wireguard could improve the performance of a custom solution.
I dunno if they'd eat that cost again, and I wouldn't try it on purpose.
But keep in mind that they have an incentive to do this: it's the cost of acquiring users without scaring them about a possibly humongous bill.
I now have better billing alarms set.
Under section 4, anyone can set up an AWS cost budget.
It seems to follow their practice of building the “API first” solution. By the way, a Lambda/programmatic solution allows you to disable lower-priority resources and retain higher-priority ones. It's a lot of effort, but it's incredibly flexible.
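A sketch of that idea with the plain AWS CLI (the priority tag is made up; tag your own resources however you like):

    # stop every running instance tagged priority=low, leave the rest alone
    aws ec2 describe-instances \
      --filters "Name=tag:priority,Values=low" "Name=instance-state-name,Values=running" \
      --query "Reservations[].Instances[].InstanceId" --output text \
      | xargs -r aws ec2 stop-instances --instance-ids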
That hasn't been my experience with AWS but then again it has been years since I allowed such a thing to happen.
I know this is for private use, but the only point of game streaming is so that companies can put ads in the stream.
Setup details: I use Paperspace since it has an image with everything already configured (doing this is surprisingly tricky; I was never able to get GPU drivers installed and configured myself after hours of trying, including trying various AMIs). It has auto-shutdown after an hour of inactivity. I use VirtualHere to forward a joystick to the host, and Parsec for streaming. It works great, and I pay $.50/hr plus $5/mo for storage. Over WiFi with the cheapest Comcast plan, the latency is about 30 ms, which is fine for those games.
Concretely, update the “solutions for cloud rented pc’s” section at this link: https://support.parsecgaming.com/hc/en-us/articles/115002601...
(Which the client points at)
On Paperspace, you need to set up a paid public IP, or deal with higher latency. Now, I know what ZeroTier is, but my network latency was 100ms under load vs 30 with the public IP...
It took me four hours to debug this. Also, why does the server send every other UDP packet to a mysterious port on startup? (According to tcpdump on my router)
Also, in the question about broken mouse cursors, the correct answer is to close the paperspace web tab displaying the desktop of the instance.
I’ll definitely recommend parsec to friends. I got rid of my last windows box a while back, but I have a few windows-only steam titles that I’d like to play.
Moving forward: my gaming desktop is something like 25% of the household electricity budget. I hate wasting all that power, so I really want to replace it with a thin client.
IIRC I tried the Parsec AMI but it was difficult to use and required a lot of fiddling. I can’t remember the details unfortunately, but I would have preferred to just use that instead of Paperspace (this might be a niche use case since I’m an engineer). If I could have figured that out, I would have written a CLI around it and open sourced it for others to use.
Also because I’m an engineer, I really wanted to read the source code to see how it worked. But I get that it’s your magic sauce :).
Thanks for the awesome tech!
My son's young enough that he still goes through phases. That and he leaves every summer to go to his grandparents'. Sticking with cloud gaming gives me the freedom to just throw everything away or stop using it for long periods of time without guilt.
Also, it honestly makes me feel better to know that I didn't buy a bunch of fancy hardware to use 3.6% of the time. AWS rents the hardware out to somebody else when I'm not using it, and is strongly incentivized to get its utilization as close to 100% as possible.
And Stadia doesn't have either game.
If you run your own host, you can play whatever game you want, and mod it however you feel like. Otherwise, you're stuck with whatever curated stuff the platform provides.
Ah, I see we're alike.
At this rate I'll get to my ML workstation and train my own GAN in 2035.
Everybody's needs are different -- there's no reason to dismiss this.
But my experience with cloud gaming on EC2 has been pretty great too; as I'm getting older, I feel less and less inclined to shell out for a new video card every two years.
HDR is pretty sweet, too.
HN is horrible for gaming/PC building advice. There are much better places.
I've tried 1440p @ 144 hz and 4K @ 60 hz, and I'll take the 144 hz any day.
> HN is horrible for gaming/PC building advice. There are much better places.
Absolutely. I come here for my tech news and discussion, but gaming? No way.
Last time I tried GeForce Now in beta, there was a very perceptible delay in input. Not sure if that’s still the case now.
I don't want to own gaming hardware - I don't want the space taken up, the hassle of the upgrades, and I'm a super-super-casual gamer so I don't want to invest in anything. It sounds perfect for me and I can't wait for it to be working well!
In the case of AWS/cloud, it's not just gaming but also always thinking about AWS/technical stuff...
If you can afford to buy all your games on Steam and have gigabit internet, go ahead and use a cloud gaming service. But you'll see the video compression, and you won't be able to play the great game you once bought on Humble Bundle.
My main desktop is dual-booted, but I never really boot into Linux since I often game, although I really prefer the Linux interface and desktop. If Windows could be a KVM guest and GPU passthrough worked, I'd definitely go with that option.
edit: I found this, which seems to give a good overview, but I'm still curious if you have any tips/hacks for getting it working.
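From what I've read so far, the first step seems to be reserving the GPU for vfio-pci at boot. A sketch for an Intel board (the device IDs are examples; get yours from lspci -nn, and it's the GPU plus its HDMI audio function):

    # in /etc/default/grub, then update-grub and reboot
    GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=pt vfio-pci.ids=10de:13c2,10de:0fbb"

    # afterwards, both functions should report: Kernel driver in use: vfio-pci
    lspci -nnk | grep -A3 NVIDIA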
Building a rig, just putting Windows 10 on it, and installing Steam takes about an hour if you've done it before, and about 6 hours if you haven't and have a tutorial. You could also just buy a prebuilt rig for 10 percent more.
There are a FEW legitimate reasons to go this route, though not many good ones. Still, I have to commend this article for true hacker spirit. It's lovely!
Which ones are you referring to?
Hotel internet seems to be about 10 years behind the current technology at all times.
You can build some fairly impressive SFF PCs, but even an InWin Chopin breaks that volume budget if I've done the math right, and it's not a cheap hobby. To me they feel like a class above this.
But I don't think there's need to argue this further, I think the point that there isn't a one-size-fits-all solution has been made exhaustively, by everyone.
The setup I used is here:
...and has been kept up to date (it was annotated last month, and the config is stable).
The Azure solution sounds much more expensive than the prices quoted in the article
It's based on a Gigabyte X470 and a Ryzen 5 2600. The rig is watercooled (using ZMT tubing), which makes it super silent (and I don't need a heater in my office any longer; it's amazing how efficient watercooling is - yet the heat still has to go somewhere).
The GPU is a Sapphire Pulse Vega 56. All in all, the complete rig consumes 80 watts at idle, and 130 watts when I'm using the gaming VM for surfing. When gaming on it, consumption can go up to a reasonable 500-ish watts.
I'm also using a Thinkpad X230 as my "roadwarrior".
While "the cloud" might be cheaper, I love having everything "at home".
I suspect that a lot of the people saying that sort of thing is impossible just haven't tried it. Most of my friend group uses Shadow for everything nowadays, and I can't see myself switching back to maintaining my own hardware.
First-person games like Skyrim work great; even latency-sensitive games which require quick reactions, like Thumper, do pretty well. I get lower latency / better image quality using Quest+VD than I do with a standard 1080p stream on my Steam Link.
I think the fact that the VR video stream is already a 180-degree stereoscopic format helps with the latency; you're only ever looking at a subset of the stream, so when you move your head around it's the same latency as local VR.
The only noticeable thing is that if you move your head really quickly to look over your shoulder, it can take a few milliseconds for the footage to appear.
Also, the open UDP and TCP to 0.0.0.0/0 in the security group made me cringe. At least set it to your current IP as a /32, and perhaps set up a script that watches your public IP and updates the SG automatically when it changes.
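Something like this run from cron would do the watching (group ID and port are placeholders):

    # re-point the security group at the current public IP
    IP=$(curl -s https://checkip.amazonaws.com)
    # revoke the previous rule first (revoke-security-group-ingress), then:
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 --protocol udp --port 8000 --cidr "$IP/32"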
AWS inbound traffic is free but persistent storage is not, so I downloaded the entire game whenever I spun up the AWS instance, which probably happened 10+ times. I still wonder how much that cost Steam in terms of outbound traffic...
Tried it for gaming and it was a great experience.
Gaming only works if you are near their data centers though - AFAIK US and NL; I got high latency in Germany, but that's not their fault.
You'll find $10 referral codes online to try it for free.
Reasonable prices, and it works very smoothly.
I don't think that's gonna be much fun, especially not if you're used to playing on a powerful local machine.
It depends on the setup but encoding and decoding combined can be done in less than 7 ms (1 frame at 144 fps) on good hardware:
Last year on more modest hardware I would regularly see around 12 ms (1 frame at 83 fps).
It's basically the same implementation. I must say that keeping the instance open cost me a bunch of money in the end.
When you're not consuming media content, this approach is better because the fine details look crisper. In video, finer detail is limited by the bitrate and keyframe frequency.
Overall, I love the idea, but it was a huge pain and not worth my time.
I'm not an AWS user, but don't spot instances risk being shut down/paused/etc at any moment? It seems like a bad solution for remote gaming.
Based on the fact that it's been done before and didn't get that far (Gaikai and OnLive) - but nobody seems to remember that.
OnLive was released when most internet connections were still crummy, and before HD-capable computers were embedded in TVs, smartphones and tablets. Despite all of that it still worked pretty well, but at the time I thought: if I can run the game on my desktop, why bother? Nowadays I don't have a gaming desktop capable of the newest games, so I think about it differently.
There were also a lot of government subsidies for electric cars, and the range was not good.
The internet in those days was as good as it is now in Western countries.
We'll see, but I'd be happy to place an online money bet.
Reminder: IT is responsible for nearly 4% of CO2 emissions worldwide. Video streaming represents 60% of that.
In a nutshell, cloud gaming is like switching on an oven at full power with the door open. If a majority of people use that kind of solution, the catastrophic effects of global warming will be even worse.
The true problem is that we're not paying the real price of AWS and other cloud services. We're expecting our children to pay for the ecological impact.
All device electricity usage is another 20% (there's no breakdown there, but most of the streaming-device electricity goes to the screen, which you'd pay for with broadcast or DVD playback).
Video streaming servers should be relatively low energy vs. the other servers running in the data center.