This looks great! I’m just waiting on a real-time GI solution for URP!
This is looking good. The non-ray-traced SSGI in HDRP is pretty decent, but this looks slightly better.
Please share a demo file so we can check its quality and performance during development.
This looks awesome!
What would make it even better is if it could also do reflections! Is that a possibility?
Is there a way to render GI with a larger camera FOV and then overlay the GI effect on the main camera, so the screen-space GI can pick up illumination from lights that are not visible on screen?
If such an effect could be achieved, it could be an alternative to a fully dynamic GI solution but with much better performance. That means more people would be willing to pay for it.
Hi everyone!
I’m working on H-Trace almost every day, but all tests are still done in the Cornell Box scene, so I’m not posting new screenshots because they wouldn’t look much different from the ones I’ve already posted here. I’m still working on the thickness rendering, which is nearly finished. It’s very important to get it right if we want to avoid over-occlusion and get nice, accurate shadows. I’m also trying to make it as automatic and as fast as possible, so it takes time.
Now, onto the questions :)
Thanks! I’ll see what I can do after I release the HDRP version. I heard that URP is close to getting a custom pass system similar to the one that HDRP has now. If it’s true, the porting process may be easier than I expect.
I was thinking about making a demo for the Asset Store. Not 100% sure about it yet, though. In any case, at the moment I’m trying to put as much of my resources as possible into the actual development process. So, I’m afraid, the demo will have to wait until I’m nearly finished or the asset is submitted to the store. The good news is that this should happen quite soon (I hope).
Thank you!
I was also thinking about reflections, but I can’t say anything for now, because: 1) I haven’t tried it yet, and 2) I don’t have a lot of experience with SSR. The author of one of the papers I use mentions that it may indeed be possible to make reflections work (at least in theory). So, no promises, but I will definitely look into this after the release.
I’ve seen this approach (sometimes called a “guard band”) in different papers on AO. First, a double-camera setup is a no-go. It is too costly in HDRP, and it may be overkill for such a scenario. There are techniques (Multi-View AO) that make use of two or more cameras, but that’s a different story, and it’s certainly not something you want in HDRP. Second, it wouldn’t be a silver bullet anyway, because you’d need something like a 360° FOV to cover everything, and even then there’s no way to retrieve any data from behind objects.
But there are other ways to achieve the desired result. For example, I’ll try to support a fallback to reflection probes and, if possible, to the new adaptive probe volumes. Neither is really dynamic, but it’s better than nothing. Moreover, my friend and I are also working on VXGI tracing, which is planned as a more robust fallback for when the screen tracing fails. Again, no promises here, but there are plans to turn this into a full-scale GI solution. But first I have to get as much as possible out of the screen-space part.
Currently there is no out-of-the-box, fully real-time GI solution for Unity. I hope your GI solution will be the first and only one, and that it finds maximum popularity among Unity users.
I’m waiting to see your GI solution on a real scene (the HDRP sample scene is a good one).
Can’t wait to try this, I’ll buy it for sure. I only need it for pretty screenshots and videos so moving to HDRP was the most logical conclusion for me.
Thanks! But remember that, at least for now, it’s a screen-space effect, so all (or most) of the screen-space limitations are present. I’ll try to test it on Sponza and the HDRP sample scene soon.
Thank you! Btw, have you tried path-tracing in HDRP? If you don’t care for real-time performance, there’s nothing that can beat a path-traced result.
I don’t have an RTX card at the moment, otherwise I would have used it by now.
Just wondering, how is the performance compared to Unity’s SSGI?
Hi!
It depends on the comparison method, because both H-Trace and Unity’s SSGI have a number of parameters that can be tweaked. For example, you can set any number of ray steps for Unity’s SSGI. The same goes for the sample count in H-Trace. But ray steps are not samples, so we can’t just type the same number into both and compare. Next, we have denoisers. Enabling denoisers also impacts performance. Furthermore, there’s Unity’s native TAA, which also acts as a denoiser in this case. By setting a high sample count in H-Trace you can get rid of the noise using only this native TAA (no additional denoisers required). But do we count the performance impact of the TAA itself in this case? All in all, it’s a complicated question.
I have an initial comparison in the first post (under the spoiler) where I tweaked settings of H-Trace and Unity’s SSGI to achieve roughly the same performance. This way you can understand what visual output each of them yields under the same performance cost.
From a technical point of view, SSGI seems to trace 1 sample per pixel (maybe I’m wrong, but it’s definitely not a big number). The sampling radius is controlled by the user with the “Max Ray Steps” parameter, and in order to sample across the whole frame you have to use some ridiculous number, like 1000, which will absolutely murder performance. And even if you do that, you still get 1 sample (or a couple) per pixel, so you’ll need to denoise the result heavily. Thus, two denoisers. This denoiser combination leads to a visible temporal accumulation lag and some artifacts.
H-Trace, on the other hand, samples across the whole frame by default. And it can trace hundreds of samples per pixel while maintaining real-time performance. This results in a naturally low noise level, so there’s no need to use multiple denoisers (or at least not so heavily). The downside is less accurate thickness detection (which is called “Depth Tolerance” in SSGI). But that has been dealt with, and now there’s a mode that can potentially render even more accurate thickness than Unity’s SSGI. It’s not free (from a performance point of view), but it’s optional.
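To illustrate the difference between the two tracing styles, here’s a heavily simplified sketch (illustrative pseudocode only, with made-up helper names, not the actual H-Trace or Unity SSGI source). A classic tracer marches one ray step by step until it hits something, yielding a single sample; a horizon-based tracer walks a screen-space line once and lets every unoccluded depth sample along it contribute:

```hlsl
// Illustrative pseudocode only. DepthTestHit, SampleSceneColor and
// HorizonCosine are hypothetical helpers, not real pipeline functions.

// Classic ray-marched SSGI: one ray, N steps, one hit (one sample) at most.
float3 RayMarchedGI(float2 uv, float3 rayDirSS, float stepSize, int maxRaySteps)
{
    float2 pos = uv;
    for (int i = 0; i < maxRaySteps; i++)   // cost scales with the step count
    {
        pos += rayDirSS.xy * stepSize;
        if (DepthTestHit(pos, rayDirSS))    // ray intersected the depth buffer
            return SampleSceneColor(pos);   // a single radiance sample per ray
    }
    return 0;                               // miss: no contribution
}

// Horizon-based tracing: one sweep along a direction, many contributing samples.
float3 HorizonTracedGI(float2 uv, float2 dirSS, int sampleCount)
{
    float3 radiance = 0;
    float maxHorizonCos = -1.0;             // highest horizon found so far
    for (int i = 1; i <= sampleCount; i++)
    {
        // Samples are distributed across the whole frame, not along a short ray.
        float2 pos = uv + dirSS * (i / (float)sampleCount);
        float horizonCos = HorizonCosine(uv, pos);
        if (horizonCos > maxHorizonCos)     // sample rises above the horizon
        {
            // Weight by the newly uncovered slice of the hemisphere.
            radiance += SampleSceneColor(pos) * (horizonCos - maxHorizonCos);
            maxHorizonCos = horizonCos;
        }
    }
    return radiance;
}
```

This is also why ray steps and sample counts can’t be compared one-to-one: a ray step is just a march increment, while every horizon sample can carry radiance.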
P.S. I’m in the middle of the actual testing and it’s not fair to make the final comparison yet. Things may change, new issues may arise. Plus, H-Trace is not battle tested yet, and it will mature and improve in the future (if someone finds it useful). I will post some screenshots, numbers and visual comparisons soon.
Will you allow us to control the intensity of the screen-space effect? I find something like this could be useful for blending dynamic objects with baked lighting. I wonder if controlling the intensity would keep the GI from over-contributing to an already baked scene.
Sure! If you find this useful, I will make a slider to control the intensity. I will also try to make use of the “Receive SSR/SSGI” toggle in the material inspector. An ideal scenario would allow you to completely disable H-Trace on a per-object or per-material basis. That could be useful if a portion of your scene is baked (or uses some other type of GI), so you could enable H-Trace only for non-baked (dynamic) objects, thereby mixing H-Trace correctly with your baked lighting.
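As a rough sketch of what such a slider could control (the names below are assumptions for illustration, not the actual implementation), the traced contribution would simply be scaled before it is composited over the existing lighting:

```hlsl
// Hypothetical composite step; the intensity would be driven by a UI slider.
float3 CompositeGI(float3 sceneLighting, float3 tracedGI, float intensity)
{
    // intensity = 0 leaves the existing (e.g. baked) lighting untouched;
    // intensity = 1 adds the full traced indirect contribution on top.
    return sceneLighting + tracedGI * saturate(intensity);
}
```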
Thanks so much! I will definitely be getting this when it releases. My final question is: does this play nicely with Unity’s SSR and/or reflection probes? I’m not a graphics programmer by any means, so I don’t know if reflections are something that would bug out your system.
Also, I saw in your GTAO thread that you would also be able to do specular occlusion thanks to bent normals. Is that something that will also make it into H-Trace?
This looks fantastic! Curious, is this only supported in deferred rendering or would it work in forward as well?
Hey, I’ve tried three different types of GI for Unity’s Built-in pipeline (SSIL, SSGI and SSRT), but none of them seem to be working for me. If this isn’t for Built-in, could it possibly be converted to URP, and how soon will the project be out for release? It looks very promising, can’t wait.
Just came across this - it definitely looks interesting, good job!
One quick question: I’ve been experimenting with screen-space GI stuff (such as Pascal Gilcher’s SSRTGI implementation, which looks quite similar to your approach?), but depending on the case, screen space simply does not provide stable enough information (for example in my use case, which is high-end LED volume rendering).
So, I’ve thought about the possibility of adding a DXR ray-tracing pass for those rays that don’t hit anything in screen space. If the screen-space part works well, you don’t have to shoot gazillions of them, so the performance should be pretty bearable. I already made a simple test that shoots DXR rays from the G-buffer.
Have you considered this?
br,
Sami
You can enable both H-Trace and SSR (or any other screen-space effect) at the same time, no problems with that. One limitation is that H-Trace GI won’t be visible in the reflections provided by Unity’s SSR. That’s because HDRP renders all of its native screen-space effects before H-Trace can be injected into the pipeline. I hope this limitation will be lifted once Unity adds an option to directly override screen-space effects; one of the devs mentioned that it’s being worked on. Or maybe I’ll just do some custom SSR myself.
Reflection probes - sure. They are a completely separate system. If H-Trace was enabled at the time of reflection probe baking, it will be visible in the reflection.
Yep, GTAO and Screen-Space Bent Normals are also available in H-Trace. I’m planning to implement GTSO (Ground Truth Specular Occlusion) based on them. And probably Micro-Shadows as well.
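For context, one common way to compute specular occlusion from a bent normal and an AO term is to intersect the visibility cone with the specular lobe. The sketch below is just one such approximation with illustrative names; it is not necessarily the formulation H-Trace will ship with:

```hlsl
// One common bent-normal specular occlusion approximation (illustrative only).
float SpecularOcclusion(float3 bentNormalWS, float3 reflectDirWS,
                        float visibility, float roughness)
{
    // Treat the unoccluded region as a cone around the bent normal whose
    // half-angle comes from the AO/visibility term...
    float visibilityAngle = acos(sqrt(saturate(1.0 - visibility)));

    // ...and the specular lobe as a cone around the reflection vector whose
    // half-angle grows with roughness (the epsilon avoids a zero-width lobe).
    float specularAngle = max(roughness * 1.5707963, 1e-4);

    // How far the lobe is tilted away from the visible cone.
    float betweenAngle = acos(saturate(dot(bentNormalWS, reflectDirWS)));

    // Fraction of the specular cone that falls inside the visibility cone.
    return 1.0 - smoothstep(visibilityAngle - specularAngle,
                            visibilityAngle + specularAngle,
                            betweenAngle);
}
```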
Thank you!
H-Trace needs _GBufferTexture0 (the diffuse buffer) to work properly. This buffer is generated only for deferred-compatible shaders (the Lit shader), as far as I know. So it’s deferred-only for now. However, all the other required buffers are there for both deferred and forward, so I can try to generate the diffuse buffer myself to support forward, if it’s possible to do that fast enough. HDRP is not super flexible when it comes to this stuff, but I’ll look into it as soon as all the other major parts are ready.
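For reference, this is roughly what the deferred path relies on. A minimal sketch, assuming the usual HDRP/SRP Core shader includes; the forward note in the comment describes an idea, not existing code:

```hlsl
// Deferred: HDRP already stores the diffuse/base color in the first
// G-buffer target, so it can simply be loaded per pixel.
TEXTURE2D_X(_GBufferTexture0); // declared for the sketch; HDRP normally provides it

float3 LoadDiffuseColor(uint2 positionSS)
{
    return LOAD_TEXTURE2D_X(_GBufferTexture0, positionSS).rgb;
}

// Forward: no G-buffer exists, so supporting it would mean an extra pass
// that re-renders visible geometry and writes its base color into a
// dedicated render target, which would then be sampled instead.
```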
Thanks!
Can you elaborate on which assets you used and what exactly didn’t work in your case? It’s always interesting to hear feedback on different GI methods.
I’ll try to port it to URP after the release.
Thank you!
I can’t tell which approach is used in Pascal Gilcher’s SSRTGI for sure, but judging from the “Ray Step Amount” parameter it has, it’s probably closer to a regular SSGI tracer like the one you can find in HDRP.
As for screen-space instability, well, it’s a general limitation of these effects. It’s impossible to make it go away completely, but there are ways to make it less noticeable and distracting. I can’t claim that H-Trace is better or worse than other screen-space effects in this regard, because I haven’t tested it in all possible scenarios. However, I did take some measures. For example, there’s an advanced thickness mode which uses a separate backface depth buffer to render object thickness accurately in cases where it’s impossible to derive correct thickness data from the regular depth buffer. I’ll write about it in the next update soon.
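As a rough illustration of the idea behind that mode (assumed buffer and helper names; the details will be in the update), backfaces are rendered into a second depth buffer, and per-pixel thickness falls out as the distance between the front and back surfaces:

```hlsl
// Illustrative sketch of thickness from a dedicated backface depth buffer.
// _BackfaceDepthTexture is a hypothetical target filled by an extra pass
// rendered with front-face culling (Cull Front), so it stores the far side
// of each object; _CameraDepthTexture, _ZBufferParams and LinearEyeDepth
// come from the usual pipeline includes.
TEXTURE2D_X(_CameraDepthTexture);
TEXTURE2D_X(_BackfaceDepthTexture);

float GetThickness(uint2 positionSS)
{
    float frontDepth = LOAD_TEXTURE2D_X(_CameraDepthTexture,   positionSS).r;
    float backDepth  = LOAD_TEXTURE2D_X(_BackfaceDepthTexture, positionSS).r;

    // Convert both to linear eye-space distance before subtracting.
    float thickness = LinearEyeDepth(backDepth,  _ZBufferParams)
                    - LinearEyeDepth(frontDepth, _ZBufferParams);

    // A measured thickness replaces a single global guess, which would
    // over-occlude thin objects and under-occlude thick ones.
    return max(thickness, 0.0);
}
```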
Yep, I thought about something along these lines. I want to try VXGI (or maybe SDFGI) as a fallback when the screen tracing fails. I’m not sure about DXR, because you need compatible hardware for that. But it probably makes sense to add it at least as an option for those who have such hardware.
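Conceptually, the fallback chain would look something like this (pure pseudocode with made-up function names, since none of this is implemented yet):

```hlsl
// Hypothetical fallback chain: screen-space trace first, world-space second.
// ScreenSpaceTrace and VoxelConeTrace are placeholders for illustration.
float3 TraceIndirect(float3 positionWS, float3 dirWS, float2 positionSS)
{
    float3 radiance;
    if (ScreenSpaceTrace(positionSS, dirWS, radiance)) // hit found on screen
        return radiance;

    // The ray left the screen or the data was occluded: fall back to a
    // world-space representation (a voxel grid for VXGI, a distance field
    // for SDFGI, or a hardware DXR ray on compatible GPUs).
    return VoxelConeTrace(positionWS, dirWS);
}
```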
Thanks for the reply! I’ll be patiently waiting for this amazing asset! I really admire all the technical work and skill that goes into these techniques.
Have you thought about releasing the project on the Asset Store as-is and updating the features as you go along? I know the Unity community would champ at the bit to buy a screen-space GI solution with just the results you have in the OP, even without all of the features listed in your short-term plan.