Do uninitialized RenderTexture areas lead to wrong texturing (tiled RenderTexture problem)?

I am working on a procedural planet engine and am facing a problem when creating a tiled RenderTexture that holds, e.g., the NormalMaps for a larger number of planet planes.

Now I am trying to implement a hint by zeroyao from Unity3D in this very interesting thread about GPU instancing.

To sum up the process of how I create a planet plane: to render planet planes, I pass (per planet plane) a RenderTexture and a ComputeBuffer to a ComputeShader, which fills the ComputeBuffer with, e.g., position data and the RenderTexture with NormalMap data. Everything is then passed to a Vertex/Surface shader that displaces the mesh's vertices with the ones from the ComputeBuffer and applies the NormalMap and SurfaceMap RenderTextures.

What I am now trying is to create one large tiled RenderTexture (so, a TextureAtlas) where each tile holds, e.g., the NormalMap of one plane. So if my normal maps are 64x64 each, I create a large 1024x1024 RenderTexture (16x16 = 256 slots) that I can use across many planes. The ComputeShader and the Vertex/Surface shaders receive the necessary offset information to write (in the ComputeShader) and read (in the SurfaceShader) at the right location of the RenderTexture.
Once something similar is done with the ComputeBuffers, so that lots (thousands) of planes can share the same large RenderTexture and ComputeBuffer, batching should start to do its magic, resulting in far fewer draw calls.

I am facing one problem while implementing this for RenderTextures. I implemented a SharedTextureManager that creates a new mega-RenderTexture on demand and keeps track of which of its slots are used. If a new plane requires a RenderTexture to create, e.g., a NormalMap, it requests a slot (= a tile of that mega-RenderTexture) from the SharedTextureManager and receives a reference to the texture plus some offset information. If all slots of a texture are used, the SharedTextureManager creates a new one and adds it to its list. Then the usual process continues: the RenderTexture reference, including the offset information, is passed to the ComputeShader, which fills the pixels with the normal information.


this.sharedTextureManager = new SharedTextureManager(nPixelsPerEdge, 1, 2, RenderTextureFormat.ARGBHalf);
quadtreeTerrain.sharedNormalMapTextureSlot = this.sharedTextureManager.GetSharedTextureSlot();
quadtreeTerrain.patchGeneratedNormalMapTexture = quadtreeTerrain.sharedNormalMapTextureSlot.Texture();
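In essence, the manager boils down to something like this (a simplified sketch, not my actual implementation; slot freeing and per-slot bookkeeping are omitted, and names besides SharedTextureManager/GetSharedTextureSlot/Texture/Offset are illustrative):

```csharp
using System.Collections.Generic;
using UnityEngine;

public class SharedTextureSlot
{
    public RenderTexture texture;
    public Vector4 offset;   // xy = downscale of one slot, zw = UV offset of the slot

    public RenderTexture Texture() { return texture; }
    public Vector4 Offset() { return offset; }
}

public class SharedTextureManager
{
    private readonly int slotSize;      // pixels per slot edge, e.g. 64
    private readonly int slotsPerEdge;  // slots per texture edge, e.g. 16 -> 1024x1024
    private readonly RenderTextureFormat format;
    private readonly List<RenderTexture> textures = new List<RenderTexture>();
    private int nextFreeSlot;

    public SharedTextureManager(int slotSize, int slotsPerEdge, RenderTextureFormat format)
    {
        this.slotSize = slotSize;
        this.slotsPerEdge = slotsPerEdge;
        this.format = format;
    }

    public SharedTextureSlot GetSharedTextureSlot()
    {
        // Create a new mega-RenderTexture when none exists or the current one is full
        if (textures.Count == 0 || nextFreeSlot >= slotsPerEdge * slotsPerEdge)
        {
            int edge = slotSize * slotsPerEdge;
            var rt = new RenderTexture(edge, edge, 0, format);
            rt.enableRandomWrite = true;   // required for RWTexture2D<> access in the ComputeShader
            rt.Create();
            textures.Add(rt);
            nextFreeSlot = 0;
        }

        int x = nextFreeSlot % slotsPerEdge;
        int y = nextFreeSlot / slotsPerEdge;
        nextFreeSlot++;

        float scale = 1f / slotsPerEdge;
        return new SharedTextureSlot
        {
            texture = textures[textures.Count - 1],
            offset = new Vector4(scale, scale, x * scale, y * scale)
        };
    }
}
```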


#pragma kernel CSMain2

RWTexture2D<float4>  patchGeneratedNormalMapTexture;

[numthreads(8,8,1)]  // thread-group size is not shown in the original snippet
void CSMain2 (uint2 id : SV_DispatchThreadID)
{
  // Get the constants
  GenerationConstantsStruct constants = generationConstantsBuffer[0];

  [... calculate normals...]

  // Prepare the texture ID: shift into this patch's slot of the shared texture
  uint2 textureID = uint2(id.y + constants.sharedNormalMapTextureSlotPixelOffset.x,
                          id.x + constants.sharedNormalMapTextureSlotPixelOffset.y);

  // Create the ObjectSpace NormalMap
  float w = constants.nPixelsPerEdge;
  float h = constants.nPixelsPerEdge;

  // Store the normal vector (x, y, z) in a RGB texture
  float3 normalRGB = normal / 2;
  patchGeneratedNormalMapTexture[textureID] = float4(normalRGB, 1);
}

After the dispatch to the ComputeShader, the RenderTexture and the offset information are passed to the Vertex/Surface shader. A _NormalMapOffset float4 is passed, where X and Y define the downscale of the tile compared to the overall texture, and Z and W define the UV offset.


quadtreeTerrain.material.SetTexture("_NormalMap", quadtreeTerrain.patchGeneratedNormalMapTexture);
quadtreeTerrain.material.SetVector("_NormalMapOffset", quadtreeTerrain.sharedNormalMapTextureSlot.Offset());
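As a concrete example of that offset convention (illustrative values only; the tile coordinate (3, 5) is just an example): with 64x64 tiles in a 1024x1024 texture, the vector would be built like this:

```csharp
using UnityEngine;

// How _NormalMapOffset maps a patch-local UV in [0,1] into its tile of the atlas.
// Illustrative: 64x64 tiles in a 1024x1024 atlas, tile coordinate (3, 5).
float scale = 64f / 1024f;            // 0.0625: one tile's share of the atlas
Vector4 normalMapOffset = new Vector4(
    scale, scale,                     // X, Y: downscale of the tile
    3 * scale, 5 * scale);            // Z, W: UV offset of tile (3, 5)

// In the surface shader the patch UV is then mapped as:
// atlasUV = IN.uv_NormalMap * _NormalMapOffset.xy + _NormalMapOffset.zw
```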


  uniform float4 _NormalMapOffset;

  void vert(inout appdata_full_compute v, out Input o)
  {
  #ifdef SHADER_API_D3D11
    // Read data from the buffer (the buffer index is elided in this snippet)
    float4 position = patchGeneratedFinalDataBuffer[].position;
    float3 patchCenter = patchGeneratedFinalDataBuffer[].patchCenter;

    // Translate the patch to its 'planet-space' center
    position.xyz += patchCenter;

    // Apply data
    v.vertex = float4(position);
    o.uv_NormalMap = v.texcoord.xy;
    o.worldPos = mul(unity_ObjectToWorld, v.vertex);
    o.objPos = v.vertex;
  #endif
  }

  void surf(Input IN, inout SurfaceOutputStandard o)
  {
    // Apply the normal map: scale the patch UV into the tile, then shift by the tile's UV offset
    fixed3 normal = tex2D(_NormalMap, IN.uv_NormalMap * _NormalMapOffset.xy + _NormalMapOffset.zw).rgb;
    o.Normal = normal;
  }

=> Everything works fine until there is a “free” slot in the RenderTexture, “free” meaning that not all areas of the RenderTexture were previously written to in the ComputeShader.

If the RenderTexture is 1x1 slots, everything is fine.

If the RenderTexture is 1x2 slots, everything is still fine. For each RenderTexture, both slots were used.

If the RenderTexture is 1x4 slots, the texturing goes wrong: the area is textured grey. In the debugger you can see that only half of the RenderTexture was used (which is correct).

Even though I use offsets to read from the RenderTexture, my impression is that the uninitialized RenderTexture areas (although never read from in the shader) seem to lead to this behavior.

Is that possible, or even expected behavior, when using RenderTextures? I was hoping this wouldn't become an issue, as I assumed that using the offset information and only reading areas of the RenderTexture that were previously written to should work.
Does anyone know how to solve this? That would really help me, as I am struggling a little here (while hoping that getting this to work could be my breakthrough on the draw-call issue).

I was able to solve the issue myself. I had one RenderTexture.Release() call in my quadtree code, executed at the point where a quadtree node is split. Of course, this must not be done when working with a shared RenderTexture.
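So instead of releasing the RenderTexture itself when a node is split, the node should only return its slot to the SharedTextureManager, which releases a texture once none of its slots are in use anymore. A sketch of how that could look (ReleaseSharedTextureSlot and the usedSlots bookkeeping are hypothetical names, not from my actual code):

```csharp
// Wrong for shared textures:
//   quadtreeTerrain.patchGeneratedNormalMapTexture.Release();
// Instead, the node only returns its slot; the manager decides when the
// RenderTexture itself may be released.
public void ReleaseSharedTextureSlot(SharedTextureSlot slot)
{
    usedSlots[slot.texture].Remove(slot);      // hypothetical per-texture slot set
    if (usedSlots[slot.texture].Count == 0)
    {
        slot.texture.Release();                // only now release the RenderTexture
        textures.Remove(slot.texture);
        usedSlots.Remove(slot.texture);
    }
}
```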

Works like a charm now. The only thing I still need to get under control is the bleeding due to the atlas.
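A common way to handle that bleeding is a gutter: inset the sampled UV region by half a texel on each side (tileX/tileY below are the slot's tile coordinate; all values are illustrative):

```csharp
using UnityEngine;

// Sketch: shrink the sampled region of a 64px tile in a 1024px atlas by half a
// texel per side, so bilinear filtering never reaches the neighbouring tile.
float atlasSize = 1024f;
float tileSize = 64f;
float tileScale = tileSize / atlasSize;
float halfTexel = 0.5f / atlasSize;

Vector4 offset = new Vector4(
    tileScale - 2f * halfTexel,            // X: downscale, minus one texel total
    tileScale - 2f * halfTexel,            // Y: same for the vertical axis
    tileX * tileScale + halfTexel,         // Z: UV offset, pushed in by half a texel
    tileY * tileScale + halfTexel);        // W: same for V
```

Note that a half-texel inset only covers bilinear filtering; if the atlas is mipmapped, each tile needs a wider padded border of duplicated pixels instead.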

Shared RenderTexture: