I’ve been working on a volumetric rendering shader and lately I’ve been running into some performance issues that don’t appear to make much sense. When the scene is first loaded I’m easily hitting around 60fps, however over time this can drop to single digits. I’ve looked for all the usual suspects in the profiler (per-frame GC allocs, memory leaks etc.) and the only thing it’s really telling me is that my shader is slowly taking more and more time to execute.
I have various early-out style optimizations in place, and to eliminate these as a potential cause I’ve tested restricting my raycasting loop to always iterate the same number of times per pixel. I’m also testing with a static camera to rule out transform/projection changes, as well as removing any other animation from the shader, essentially keeping the exact same loops and data every frame. Still, I see these performance drops over time.
In case it helps, the per-frame flow of the shader is this:
From LateUpdate I run my raycast shader, rendering to a RenderTexture
From a PostRender delegate I run a combiner shader and then use GL methods to draw a fullscreen quad with the result. The reason for doing this rather than in an ImageEffect is that I can use HW depth testing to clip against the scene rather than manually sampling a depth texture in the shader.
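For reference, here’s the rough shape of that flow (a minimal sketch; the class and field names are placeholders, not my actual code, and the HW depth clip itself lives in the combiner’s shader via ZTest):

```csharp
using UnityEngine;

public class CloudCompositor : MonoBehaviour
{
    public Material raycastMaterial;  // the volumetric raymarch shader
    public Material combineMaterial;  // combiner; its shader would use ZTest LEqual for the HW depth clip
    public RenderTexture cloudBuffer; // offscreen target for the raymarch pass

    void LateUpdate()
    {
        // Render the volumetric pass into the offscreen buffer.
        Graphics.Blit(null, cloudBuffer, raycastMaterial);
    }

    void OnEnable()  { Camera.onPostRender += DrawFullscreenQuad; }
    void OnDisable() { Camera.onPostRender -= DrawFullscreenQuad; }

    void DrawFullscreenQuad(Camera cam)
    {
        combineMaterial.mainTexture = cloudBuffer;
        combineMaterial.SetPass(0);

        GL.PushMatrix();
        GL.LoadOrtho(); // draw in normalized [0,1] screen space
        GL.Begin(GL.QUADS);
        GL.TexCoord2(0f, 0f); GL.Vertex3(0f, 0f, 0f);
        GL.TexCoord2(1f, 0f); GL.Vertex3(1f, 0f, 0f);
        GL.TexCoord2(1f, 1f); GL.Vertex3(1f, 1f, 0f);
        GL.TexCoord2(0f, 1f); GL.Vertex3(0f, 1f, 0f);
        GL.End();
        GL.PopMatrix();
    }
}
```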
As mentioned, performance starts great but slowly decreases; the only variation is how quickly and how consistently it drops. At this point I’m wondering if it isn’t just my crappy GPU overheating.
If anyone has any suggestions as to what could cause this from a Unity POV, I’d love to hear them.
Quick update. After writing this I figured I’d check the GPU/CPU temperatures while running my shader to rule out overheating. Neither the GPU nor the CPU ever reaches anything close to critical temperature; both simply fluctuate around a base level as you’d expect. Meanwhile the performance continues to slowly drop.
So, not totally convinced by my quick test that it wasn’t an overheating issue, I spent a bunch of time doing more thorough temperature tests. Turns out it was totally thermal throttling! Living in Florida + volumetric rendering makes the lowly GT 650M in my MBP cry.
Letting the computer run cool and then opening the GPU/CPU temperature readings side by side with Unity, I get 60fps while the temperature slowly rises. Once it reaches a certain point (evidently Apple’s safety cutoff) my framerate starts to drop, as do the temperatures. I was able to reproduce this every time. The reason for the single-digit FPS is that even with throttling, it can’t bring the temperature down to a low enough level quickly enough.
Finally, I cranked the AC in our house for 20 mins and ran the tests again. Same thing, except this time it took a lot longer for the framerate to drop, and when it did, it bottomed out around 45 FPS.
All in all I consider this a win; although I use the MBP as a dev machine, it’s definitely not the target HW for this asset. It’s also been a good reminder that heat + laptops should always be factored in when doing heavy 3D work.
I’ve been posting random videos of my work over the last few months to my twitter/youtube.
Got a nerd boner. Anything I can buy or test? Also, on the subject of volumetric rendering, there are some really nice optimisation slides for cloud rendering from the Killzone developers: http://www.guerrilla-games.com/publications.html
More tricky. The clouds are rendered using 3D textures + raycasting to a RenderTexture, which is then placed in the scene over the skybox. Imagine a sphere around the camera (the atmosphere); each frame is generated by raycasting from the camera, and as the ray moves through the atmosphere it accumulates cloud particles (the 3D textures) and calculates the lighting from the sun/ambient light for those particles. By animating the 3D texture offsets and thresholds you can get some really nice animated clouds that are 100% 3D in the scene.
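To give a concrete idea of the accumulation, here’s the gist of that per-pixel loop written as plain C# (the real version lives in a fragment shader; sampleDensity and the lighting term are stand-ins for the 3D texture lookups and sun/ambient math):

```csharp
using UnityEngine;

static class CloudMarch
{
    // March from where the ray enters the atmosphere shell to where it exits,
    // sampling density and compositing front-to-back.
    public static Color March(Vector3 origin, Vector3 dir, float enter, float exit, int steps,
                              System.Func<Vector3, float> sampleDensity, Color sunLight)
    {
        float stepSize = (exit - enter) / steps;
        Color result = Color.clear;

        for (int i = 0; i < steps && result.a < 0.99f; i++) // early out once nearly opaque
        {
            Vector3 p = origin + dir * (enter + (i + 0.5f) * stepSize);
            float density = sampleDensity(p);               // stands in for the 3D noise lookups
            if (density <= 0f) continue;

            Color sample = sunLight * density;              // lighting stand-in
            sample.a = Mathf.Clamp01(density * stepSize);
            result += sample * (1f - result.a);             // front-to-back "over" compositing
        }
        return result;
    }
}
```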
The real trick is using temporal reprojection to spread the workload over several frames so that it runs at a decent framerate. It’s based on the awesome cloud tech used in Horizon Zero Dawn, which was discussed at this year’s SIGGRAPH.
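The scheduling side of that reprojection boils down to cycling an offset each frame, something like this sketch (the _UpdateOffset property name is illustrative, not my actual interface):

```csharp
using UnityEngine;

public class ReprojectionScheduler : MonoBehaviour
{
    public Material cloudMaterial;

    void Update()
    {
        // With a 4x4 pattern, each frame raymarches 1 in every 16 pixels and
        // reprojects the remaining 15 from the previous frame's result.
        int frame = Time.frameCount % 16;
        cloudMaterial.SetVector("_UpdateOffset", new Vector4(frame % 4, frame / 4, 0f, 0f));
    }
}
```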
Haha! I should be submitting to the Asset Store any day now. Thanks, yup that paper’s been a huge help in getting this working!
Nope, it’s all done in a fragment shader. I’m definitely curious to see what alternative optimizations could be done using compute but I’ll need to upgrade my dev machine before I can investigate.
Hurry up old bean! There’s hordes of us waiting with willing, open wallets : >
In the meantime could you tempt us with what features it has? One of the downsides to a solution like Truesky is that there’s no real art direction to it - you can’t really organise the look of your sky - and yours hints this sort of thing might be possible?
Another thing: it looks like it might be usable for things other than just clouds, i.e. low-lying fog or even smoke!
The main thing I’ve been working on this past month (or 2, damn UnityEditor APIs are painful) is a full cloud editor. It’s essentially a cloud painting app inside Unity, allowing you to fully art direct the clouds in your scene. I’ve posted a screenshot below. You can check out an early Vine I posted of it in action here: https://twitter.com/kode80/status/657955993498820609
All the useful properties of the cloud rendering are exposed: density, noise threshold, atmosphere size, animation speed/direction, ambient colors (for both near + far), as well as various optimization properties controlling how many frames it takes to resolve and the resolution the clouds are finally rendered at.
On top of this, while the easiest/quickest way to get clouds in your scene is to use the in-built editor, it’s also possible to ignore that and generate the coverage map yourself at runtime. This means, if you were so willing, you could create a weather system that dynamically updates the coverage map based on some form of simulation, giving you clouds that react to the weather/geometry in your scene, as in the sketch below. This is something I’m considering as a possible addon asset (depending on time/demand).
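As a rough example of the runtime route, you could drive the coverage map with something like this (scrolling Perlin noise stands in for a real weather sim, and _CoverageMap is a hypothetical property name, not the asset’s actual API):

```csharp
using UnityEngine;

public class WeatherCoverage : MonoBehaviour
{
    public Material cloudMaterial;
    Texture2D coverage;

    void Start()
    {
        coverage = new Texture2D(256, 256, TextureFormat.RGBA32, false);
        UpdateCoverage(0f);
    }

    public void UpdateCoverage(float time)
    {
        // Stand-in for a real weather simulation: scrolling 2D Perlin noise.
        for (int y = 0; y < 256; y++)
            for (int x = 0; x < 256; x++)
            {
                float c = Mathf.PerlinNoise(x * 0.05f + time, y * 0.05f);
                coverage.SetPixel(x, y, new Color(c, c, c, c));
            }
        coverage.Apply();
        cloudMaterial.SetTexture("_CoverageMap", coverage); // hypothetical property name
    }
}
```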
Also, regarding other non-cloud uses, this is something I’ve thought about a lot while working on this. A similar approach could definitely be used for smoke etc. but would require a slightly different pipeline. This is something I want to pursue further down the road.
All I ask for is speed, unlimited bagfuls of it. Perhaps even rendering lower res or something with a composite; I’m not worried how, but speed means I might actually be able to fit it within my PS4 budget (which is 120fps VR).
I’m terribly impressed and this doesn’t happen much in hippo land.
Can you talk a bit more about the 3D clouds? Are they like what you’d get if you sliced a cloud into layers and represented them with gradient colors on flat surfaces? Then the shader masks a bunch of piled-up layers according to the texture gradient?
Also, can you use them on the ground like 3D volumetric smoke, or do they need to be viewed from a certain angle to work correctly?
8ms is a pretty tight budget, especially considering you’d want to render them twice for stereo. Using the Unity profiler, I’m reading around 8ms per frame for the clouds. Now, considering that’s on a crappy GT 650M and the profiler itself adds a lot of overhead (in some cases I see 2x the fps when it’s not running), that seems to loosely match Guerrilla’s 2ms target for their PS4 implementation. Of course, without hard tests on actual HW it’s difficult to say how realistic that is in either direction.
All said, there are lots of areas for optimization, especially for VR. For this initial release I’m focused on getting it out the door, but I have both an Oculus DK1 + DK2 sitting upstairs and am planning to look into that properly post-launch.
I’d be happy to work with you to see what we can do on the PS4 side.
In the loosest sense, sort of, but not really. The correct term is raycasting, though calling it raytracing probably conveys it better. I’m literally rendering the clouds’ ‘scene’ per pixel to a separate buffer and then simply mixing that with the geometry scene in post. The clouds are fully volumetric, modeled using 3D noise textures, with a bunch of math to make them look like clouds and not just your typical noise. The noise itself is Perlin for the wispy parts and Worley for the billowing parts. There’s also a curl noise texture used for distortion, which gives a nice curl to the edges of the clouds.
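To illustrate how those noises might combine (the exact remapping I use differs, but this is the Perlin-base/Worley-erosion idea in miniature):

```csharp
using UnityEngine;

static class CloudShape
{
    // perlin and worley are assumed to be values in [0,1] sampled from the baked
    // 3D noise textures; coverage comes from the 2D coverage map.
    public static float Density(float perlin, float worley, float coverage, float erosion = 0.3f)
    {
        // Perlin provides the wispy base shape, gated by coverage...
        float baseShape = Mathf.Clamp01(perlin - (1f - coverage));
        // ...then inverted Worley erodes it, giving the billowy edges.
        return Mathf.Clamp01(baseShape - (1f - worley) * erosion);
    }
}
```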
At every step of the view ray, I cast a second ray towards the sun to calculate direct lighting. This not only gives good general lighting results but also gives you self-shadowing between clouds.
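That secondary ray is basically a short density march with Beer’s law attenuation; a simplified sketch (step count and constants are illustrative, not necessarily my exact math):

```csharp
using UnityEngine;

static class CloudLighting
{
    // March a few steps toward the light, accumulate optical depth,
    // then attenuate with Beer's law.
    public static float SunTransmittance(Vector3 p, Vector3 toSun, float stepSize, int steps,
                                         System.Func<Vector3, float> sampleDensity)
    {
        float opticalDepth = 0f;
        for (int i = 1; i <= steps; i++)
            opticalDepth += sampleDensity(p + toSun * (i * stepSize)) * stepSize;
        return Mathf.Exp(-opticalDepth); // Beer's law: fraction of light surviving the march
    }
}
```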
Currently it’s strictly atmospheric clouds. You can move the camera, including up/down, and get correct parallax, because the clouds exist in absolute 3D space in their own ‘scene’. The clouds will always appear ‘behind’ the rest of your scene, however; this was a design decision to support the asset’s most likely use case (i.e. realistic animated 3D skyboxes) and not necessarily a technical limitation.
It would be entirely possible to have the clouds intersect your scene’s geometry, however this would require additional steps in the pipeline: I’d have to read the scene’s depth buffer to handle occlusion/intersection, and it would also alter the reprojection step slightly. I experimented with doing exactly this early on; it works, but again it adds more steps, and I wanted to keep performance good for the most common scenario. Definitely something that could be worked into another asset though. Same goes for smoke, mist etc.
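For the curious, the occlusion side mostly comes down to clamping each ray’s march distance by the scene depth, roughly like this (names illustrative, not my actual code):

```csharp
using UnityEngine;

static class DepthClip
{
    // sceneEyeDepth: linear eye-space depth sampled from the depth buffer;
    // viewDotRay: cosine between the camera forward axis and the ray direction.
    public static float MarchEnd(float atmosphereExit, float sceneEyeDepth, float viewDotRay)
    {
        float sceneDistance = sceneEyeDepth / Mathf.Max(viewDotRay, 1e-4f);
        return Mathf.Min(atmosphereExit, sceneDistance); // stop marching at the first opaque surface
    }
}
```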
Yep, sounds brilliant. I’m all for speed, so if it is faster ‘behind’ the scene, then behind the scene is where it will go! Looking forward to the VR optimisations as well. I’m a way off finishing my game so that’s fine! I’m happy you’re into VR as well!
One possible optimisation for VR I guess is rendering it once; because it’s that far away, you can probably get away with it being ‘flat’…
Now the pain begins: waiting for it to pop up on asset store.
Mobile will likely hinge on whether it does 3D textures and is faster than a wet potato…
I love the new vid. One thing that looks a little off is that in the far distance the clouds seem too noisy; perhaps there’s a way to clean that up a bit?
I haven’t even attempted this on mobile yet. It’s definitely something I’ll take a day to look at post-launch, to see if it’s even feasible. AFAIK all the current generation iOS devices do support 3D textures; the real issue would be raw fragment processing power. Running it at a drastically lower resolution, I’m sure it would at least run - the question is how much fidelity you’d have to sacrifice to get an acceptable framerate.
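If you want to check your own device in the meantime, the capability flag at least is easy to query (the fallback behaviour here is just an illustration):

```csharp
using UnityEngine;

public class MobileCloudCheck : MonoBehaviour
{
    void Start()
    {
        // SystemInfo.supports3DTextures is Unity's built-in capability flag.
        if (!SystemInfo.supports3DTextures)
            Debug.LogWarning("No 3D texture support; falling back to a baked cube map.");
        else
            Debug.Log("3D textures supported; the volumetric path is at least possible.");
    }
}
```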
One thing I forgot to mention is that the cloud editor also has the option to export high quality cube maps, expressly for supporting lower-powered devices. If you don’t need full realtime updates in your project, this might be an option.