Could you get this in Unity as soon as possible for testing, maybe in 6.5 alpha?
The Surface Cache can be configured to “follow” the camera. In this case “GI resolution” will be higher near the camera and lower further away from the camera. This mechanism enables support for large(ish) worlds because there will be much less “computation per square meter” far away from the camera. I hope that answers your question.
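To give a feel for the idea (hypothetical numbers and function names, not Unity's actual scheme), a camera-following cache can be sketched as a clipmap where GI resolution halves each time the distance from the camera doubles:

```python
import math

def cells_per_meter(distance_from_camera, base_resolution=8.0, falloff=2.0):
    """Hypothetical clipmap-style falloff: resolution halves each time the
    distance from the camera doubles, so far-away geometry needs far less
    'computation per square meter' than nearby geometry."""
    level = max(0, math.floor(math.log2(max(distance_from_camera, 1.0) / falloff)))
    return base_resolution / (2 ** level)

print(cells_per_meter(1.0))    # 8.0  -- full resolution near the camera
print(cells_per_meter(100.0))  # 0.25 -- a fraction of it far away
```

As the cache's focus point follows the camera, the same budget keeps covering the area that matters most.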
We are working on it and we hope to have it ready for testing (not ready for production/release) in 6.5.
Hmm, won't a world-space-aligned solution always suffer from the same artifacts we currently get with APVs?
No, it is generally not true that any world space solution will have the same artifacts as APV. It ultimately depends on the design of the underlying data structure, i.e. how it discretizes and parametrizes the scene.
Surface Cache will not be perfect either (like most GI solutions) but it will adapt better to curved surfaces and non-axis-aligned surfaces than APV. Here’s an example:
(Note a tiny bit of SSAO was also added in this shot to make it read easier.)
One reason Surface Cache performs better is that it parametrizes the surface instead of space; another is that it parametrizes the scene more flexibly than axis-aligned probe grids. But it will of course also cost more than APV.
This is good news!
So it is quite likely that we might see a production-ready implementation this year?
I’d be interested to see how foliage behaves. This has been an ongoing struggle for us… our grass looks terrible in all current solutions. This includes HDRP’s SSGI, H-trace WSGI, etc.
We use simple opaque star shapes with alpha clipping (to shape the grass) and upward facing normals to make it look uniform and clean.
For whatever reason, real-time GI solutions tend to struggle with this type of geometry.
Assuming you are willing to upgrade to 6.7 and are using URP, then yes, this is what we are aiming for (but we still cannot promise anything).
As you point out, foliage is indeed always a challenge for any realtime GI solution, including ours. The fact that we are aiming to build something that runs well on a broad range of devices doesn’t exactly make this challenge smaller.
That said, I think we are seeing some indications that we will handle foliage decently. Below I show some examples to give you a feel for it. Note that this does not represent the final version of the feature, so please don't draw any conclusions just yet. Once we start our public testing phase we'd be happy to hear feedback from you so that we can iterate on any shortcomings you might find.
A while back there were announcements about Unity unifying its render pipelines into a single pipeline. However, I’ve been seeing that these features are now coming to URP instead. Does this mean the unified render pipeline plan has been cancelled or indefinitely postponed?
Second question regarding Surface Cache GI performance:
I’m working on a project with around 10,000 leaves rendered via GPU instancing (DrawMeshInstancedIndirect). I’ll probably skip realtime GI for this project, but I’m curious about how Surface Cache GI scales.
Does Surface Cache GI performance degrade as the number of unique surfaces in the scene increases? For example, in a scene with a very high number of mesh instances like this, would the surface count significantly impact the GI cost?
Also, is there a way to exclude specific objects (like my leaf character) from being considered by the Surface Cache system entirely, similar to how you can exclude objects from other lighting systems?
It doesn’t answer the question for me.
Floating origin usually means (depending on the implementation) that after the player has moved 3000m away from the origin, the whole world (including the player) is moved 3000m in the opposite direction, so that the player ends up back at the center of the coordinate system.
Another implementation is one where the world moves around the player constantly.
Either way - this can have a conflict with other world space data, such as NavMesh, world space baked meshes or world space lighting data.
If old lighting data persists while the whole world has moved, this will result in wrong shading whenever the world moves (either every 3000m the player has traveled or every frame, depending on the implementation).
To compensate for this, it would be necessary to have a mechanism to offset the world-space data along with the rest of the world.
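The rebasing described above can be sketched in a few lines (illustrative 1D positions and hypothetical names, nothing Unity-specific):

```python
# Minimal floating-origin sketch: once the player drifts past a threshold,
# shift every world-space position -- including any world-space lighting or
# nav data -- by the same offset, so the player lands back at the origin.
THRESHOLD = 3000.0

def rebase(player_pos, world_positions, lighting_anchors):
    """Re-center everything on the origin when the player is too far out.
    If lighting_anchors were left out of the shift, shading would be offset
    from the geometry after every rebase -- the conflict described above."""
    if abs(player_pos) <= THRESHOLD:
        return player_pos, world_positions, lighting_anchors
    offset = player_pos  # move everything the opposite way
    return (0.0,
            [p - offset for p in world_positions],
            [a - offset for a in lighting_anchors])
```

The key point is that every piece of world-space data goes through the same shift; anything that opts out (stateful GI data, NavMesh, baked positions) ends up misaligned.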
Increase your sampling settings. This is an issue with Unity being very obscure about the APV settings, but the lightmapping settings DO impact APV, so you need to increase samples.
Question for the devs: does shader complexity affect the performance of the GI? Also, having the _EmissionColor property in the shader will override anything connected to emissive. Is this intentional?
Apologies for the ping, but for a number of recent Unity editor versions it seems like the Surface Cache GI version is different from the shipped URP version, so some errors are popping up:
Would love to get the up-to-date versions of both packages to test it as you iterate on it.
We recently posted an update about this over here: Render Pipelines strategy for 2026 - #316 by FirstMnM.
Surface Cache GI will only work with objects that are present in the scene. Procedurally drawn meshes will be ignored by this feature. This is because we maintain a stateful representation of the world in order to shoot rays efficiently.
Memory usage and compute cost increase with the number of objects in the scene (as is the case with features based on ray tracing). However, the cost of a ray is logarithmic in the number of objects, which is a very nice property (e.g. doubling the number of objects won't double the execution time, theoretically). And yes, we will provide a way to exclude certain objects from Surface Cache. Excluded objects will still receive GI but they won't impact it.
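The logarithmic cost is characteristic of tree-based acceleration structures (e.g. a BVH): doubling the object count adds roughly one extra traversal step rather than doubling the work. A toy sketch with illustrative numbers (this is not Surface Cache's actual data structure):

```python
import math

def traversal_depth(num_objects):
    """Approximate node visits for one ray through a balanced
    BVH-like tree over num_objects leaves: O(log2 n)."""
    return math.ceil(math.log2(max(num_objects, 1))) + 1

# Doubling the scene adds ~1 step each time, not 2x the cost:
for n in (1_000, 2_000, 4_000, 8_000):
    print(n, traversal_depth(n))  # 11, 12, 13, 14
```

In practice traversal cost also depends on scene layout and ray coherence, but the scaling trend is the point here.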
Surface Cache won’t “move the world” so any world space data will work as before. It will only move its own focus point based on the camera, thus focusing resolution where it is more likely to be needed.

It depends on what you mean exactly, but generally no. The only "shader dependency" that Surface Cache has is the Meta pass, which is almost always quite simple. So unless you do something very complicated in there, then no.
This is a very specific question. Can you please create a separate thread about it? Thank you.
I appreciate that you want to try it out, I really do. But the truth is we are not yet ready for public testing, and therefore we are not providing any support for Surface Cache at the moment (because it is not a released feature yet). I believe the particular error you are seeing should be fixed in 6000.4.0b11, but you should in general not expect issues to be fixed at this point (this is why the feature is currently hidden).
Some time in the coming months, we will be inviting everyone in for testing. From that point onward we will be providing some support and addressing issues. Once the feature ships publicly we will (of course) provide full support.
You can modify it to cast as ulong, but after a quick test the SCGI quality and performance didn't seem very nice yet... so we need to wait for updates : )
I do indeed do something very complex.
My shaders look like this:
Where some of the features can look like this:
So there is a lot of stuff that ends up in the Meta pass (mostly albedo, which can sample multiple textures, triplanar, stochastic mapping, lots of masks, etc... emissive is a lot simpler). Did you ever think about having a new MetaGI pass where we could have a simplified representation of the material to speed up the GI contribution? Aka Meta for accurate lightmapping / MetaGI for Surface Cache.
Edit: I guess I can simply write less to the Meta pass; the GI contribution doesn't need much detail anyway, regardless of whether it is used for lightmaps, APV or Surface Cache.
Sure!
Yeah I understand that, I just couldn’t contain my curiosity, I managed to make it work:
Thanks for the tip.
So I tried it. It's obvious there's still much work and validation to be done, but I recorded some performance metrics for when it does work. I have 2 GPUs, so I tried it on both:
1- First on the iGPU (Radeon 780M) since performance is a big goal for Unity, this is baseline performance without SCGI: ~107FPS, 3.9GB VRAM
2- Now on the same iGPU but with SCGI enabled: ~76FPS (-29%), 4.6GB VRAM (+700MB):
That VRAM number was the total system usage; if I close the editor the usage is 2.4GB, so what Unity was actually using before SCGI is ~1.5GB and after SCGI is ~2.2GB (+700MB).
3- Then I tested the dGPU (RTX 4060), baseline performance without SCGI is: ~243FPS, 1.7GB VRAM
4- And lastly with SCGI: 194FPS (-20%), 2.2GB VRAM (+500MB)
Few observations:
- So apparently the performance tax gets bigger the lower-end the hardware is.
- VRAM usage (+500/700MB) is a little high, I expect it to be optimized near public release.
- Not that Nvidia GPUs need any help but you probably can extract even more performance in the Hardware RT path by supporting SER and OMMs (they were recently added to DXR 1.2 standard), they’ll be useless in Software fallback tho.
- SCGI looks good already, there are bugs and artifacts but I’m not judging as it is still pre-release and very much in the oven still, but when it works it looks better than I expected.