Seriously, diff patching has been a thing for decades and we're in the age of artificially-restricted bandwidth allocation. If Unreal can't get with the times, then Unreal needs to go away and make room for engines that bother to use modern practices.
Note: I don't work in this industry; this is just pieced together from following the news.
UE4 has an append mode to add extra content in a separate .pak, but if you're distributing through Steam it doesn't save you anything. I believe Steam understands pak files well enough to decompress, delta encode, then recompress them.
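As a toy illustration of that decompress → delta → recompress idea (this is not Steam's or Epic's actual pipeline, just a naive prefix/suffix delta to show why shipping only the changed bytes beats re-downloading a build):

```python
import zlib

def make_delta(old: bytes, new: bytes):
    """Naive delta: keep the common prefix/suffix, ship only the changed middle."""
    p = 0
    while p < min(len(old), len(new)) and old[p] == new[p]:
        p += 1
    s = 0
    while s < min(len(old), len(new)) - p and old[-1 - s] == new[-1 - s]:
        s += 1
    # The patch says: replace old[p : len(old)-s] with these middle bytes.
    return p, len(old) - s, new[p : len(new) - s]

def apply_delta(old: bytes, delta):
    p, end, middle = delta
    return old[:p] + middle + old[end:]

# A "build" where one asset changed out of many.
old_build = b"asset-A" * 500 + b"asset-B" * 500
new_build = b"asset-A" * 500 + b"asset-C" * 500

delta = make_delta(old_build, new_build)
assert apply_delta(old_build, delta) == new_build
assert len(delta[2]) < len(new_build) // 2  # patch covers only the changed region
patch = zlib.compress(delta[2])  # recompress just the delta for transfer
```

Real tools (bsdiff, Steam's content system) do far better than prefix/suffix matching, but the shape of the savings is the same: diff the uncompressed data, then compress the diff.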
Maybe they've never heard of it?
Distribution and application are generally the harder parts of diff patching, though, and I suspect most early-access games prefer not to worry about it (though 40GB seems a bit further along than early access!)
Not sure how long this has been brewing, but I imagine it provides a significant performance boost.
Anyone got any experience using it from the previews?
Great work Epic!
The feature can be dynamically turned on or off though, so devs can work with that :)
We still have issues with transparency around the clip plane area.
You might have to set your nearfield scale based on the contents of the scene. It wouldn't work if there were, say, some buildings at 200 meters and then mountains a kilometer away. But if you're in a cathedral with some chandeliers high up on the ceiling, cheating to remove the parallax of the chandeliers will not detract from the VR effect.
Your stereoscopic vision is good/useful only out to about 30 meters. Beyond that it's really diminishing returns.
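A back-of-the-envelope way to see that figure (assuming a typical interpupillary distance of about 65 mm): the vergence angle to an object at distance $d$ is roughly

```latex
\theta \approx \frac{\mathrm{IPD}}{d}
  = \frac{0.065\,\mathrm{m}}{30\,\mathrm{m}}
  \approx 2.2\ \mathrm{mrad} \approx 7.4'
```

and since the angle at infinity is zero, *all* depth beyond 30 m has to fit into those last few arcminutes of disparity. The per-meter change falls off as $\mathrm{IPD}/d^2$, so the signal collapses quickly past that range.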
I have an Actor that rotates a child Static Mesh through a series of meshes (via "Set Static Mesh") to display a number 0-30. When I preview or launch the game, the mesh ends up rendering with some crummy-looking pixelation:
So far I've tried: ensuring Settings->Engine Scalability->Resolution Scale is set to 100%; switching between Temporal AA, FXAA, and MSAA; disabling motion blur; restarting the editor multiple times; and creating a whole new project from the basic/minimal template and dropping my object into an otherwise-empty scene. Nothing's gotten rid of this jaggedness.
Another example: https://i.imgur.com/tuvcAA8.png - FXAA is turned on yet the edges are all jagged and pixelated.
If you use the new forward renderer you can turn on MSAA and get supersampling along the edges too, but some aliasing is still going to be there. 2xMSAA is only going to get you the equivalent of two samples' worth of coverage information for each edge pixel.
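For reference, these can be set in DefaultEngine.ini (cvar names as of UE 4.14+; worth verifying against your engine version, since the forward renderer is new):

```ini
[/Script/Engine.RendererSettings]
r.ForwardShading=1              ; use the forward renderer (required for MSAA)
r.DefaultFeature.AntiAliasing=3 ; 0=off, 1=FXAA, 2=Temporal AA, 3=MSAA
r.MSAACount=4                   ; MSAA samples per pixel (2, 4, or 8)
```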
I switched from Temporal AA to FXAA because it interacted oddly with the changing Static Mesh. When it switched from one number mesh to the other, the new one would jitter and sort of dissolve in until it settled out. I have another object with a texture that rapidly changes, cycling through a set, and Temporal AA causes ugly artifacting there which settles out once the texture stops changing. (This happens both with texture streaming on and off.)
edit: video of what I'm talking about - https://www.spinda.net/files/temporal-aa.mp4
For a sort of flipbook mesh like that I could see it happening too. Not sure what your best option is. You can change some variables controlling how heavily the temporal AA history is weighted, but if you cut the history back enough to get rid of all that ghosting you may end up as badly off as 2xMSAA in terms of how much AA you actually get out of it.
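The history-related cvars in question can be tried live from the editor console (defaults are from 4.x builds; check your version). A higher current-frame weight fades the accumulated history faster, which trades ghosting for less effective AA:

```
r.TemporalAACurrentFrameWeight 0.25
r.TemporalAASamples 4
```

(Defaults are roughly 0.04 and 8 respectively; raising the first and lowering the second both shorten how long stale history lingers.)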
Your best bet may be trying a higher level of MSAA; r.MSAACount controls the sample count. With really dense meshes it can start to have a high cost.
Thanks for the help! I'll go forward with this for now and see if I run into performance issues. Around how dense is "really dense"?
I also checked Application Scale in Developer Tools->Widget Reflector, but it was already at 1.0.
How is this even possible? I'm just trying to scroll through that page! Is there any other software that gets a huge list of improvements like this every few months?
The Ars article is just a high-level summary plus links to some of their older articles, whereas the original article is the actual patch notes.