ComputeShader In-place texture read/write?

Is it truly possible in Unity Pro to create a ComputeShader that can do in-place texture processing (read and write)? I’ve been fussing with this for a week without success, so maybe someone definitively knows the answer to this question.

After spending a lot of time reading MS DirectCompute documentation along with Unity documentation and endlessly searching the Unity forums, my current understanding is that the answer should be yes, with the following constraints:

  1. Create the texture as a RenderTexture with enableRandomWrite=true.
  2. The texture must be treated as a “single component” format (e.g., ARGB32 packed into one 32-bit value), since typed UAV loads in DX11 are limited to single-component 32-bit formats.
  3. In the ComputeShader, declare the texture as a RWTexture2D<int> or RWTexture2D<uint>. It is not clear to me whether ARGB32 is internally a 32-bit int or uint.
  4. In the ComputeShader, the RGBA “byte” components must be unpacked (read) from the RWTexture2D for calculations and then packed (write) back to it. To do this I am using the DX11 D3DX_DXGIFormatConvert.inl file, which has inline unpacking and packing conversions between various types. These typically use << and >> bit shifts, masks, etc.

When doing this, and for now just trying to write arbitrary data into the RWTexture2D without worrying about reading, I either get a black texture or a white texture depending on what values I left-shift into place, but no in-betweens and no color, even if I try to shift just a red value into place. Overall it does not seem to matter whether I use a RWTexture2D<int> or a RWTexture2D<uint>; the symptoms change somewhat, but neither produces the desired results.

Has anyone been able to get this to work?

As I continue to work on this, I keep coming back to questioning whether I am looking at the Unity ARGB32 texture type correctly. Is ARGB32 internally a uint? The Unity docs only say that it is an “8 bit per channel” format. In most implementations I have worked with, uints would be used for an ARGB32 representation.

The other question I have is whether Unity uses a little-endian internal representation for byte ordering? As mentioned above, I am doing DX unpacking and packing in the ComputeShader for the RenderTexture data elements. I’m assuming DX is using little-endian.

The reason I ask these two questions is that with my ARGB32-based RenderTexture in my ComputeShader, I can make the alpha transparency of the rendered texture “wiggle”, but in a very strange way, as I vary the alpha component that I am setting via alpha << 24. Even though I can make transparency wiggle, I cannot make color wiggle. I get either white or black, never any other colors. It seems like I am doing something fundamentally wrong.

This is strange. Color and transparency assignment work in my ComputeShader for an ARGB32 UAV RenderTexture if I treat it as a float4 array. Is there any possibility that the Unity documentation is wrong and an ARGB32 texture has 32 bits per channel rather than the documented 8 bits per channel?

Of course if this is the case, I will not be able to read (index/load) from this texture in my ComputeShader since it would be a multicomponent pixel format.

I’m still struggling with ComputeShaders and ARGB32 textures. Any help anyone could give me would be appreciated.

Here is a stripped-down C# script and the HLSL .compute code. As you can see, I create a Texture2D and a RenderTexture that are each ARGB32 format and of the same dimensions. The RenderTexture has enableRandomWrite=true. I put random Color pixels with alpha=1 in the Texture2D and also initially copy that data to the RenderTexture via Blit. After the Dispatch, in OnPostRender() I copy the RenderTexture back to the Texture2D via ReadPixels. I have confirmed that both of these textures render on the primitive as expected until I call Dispatch().

The ComputeShader has a Texture2D<uint> to which the C# script assigns the ARGB32 Texture2D, and a RWTexture2D<uint> to which the C# script assigns the ARGB32 RenderTexture. All this compute shader does is assign the input uints to the output uints. But the copy does not seem to work correctly, and the eventually rendered texture has been converted to grayscale with transparent pixels, even though the original Texture2D initialization used alpha=1.

I have tried everything I can think of and just cannot understand what the ComputeShader is doing to my input data. I’ve confirmed that I can manually set the output pixel values in the compute shader and those display as expected. [Edit] Actually, I cannot set the output texture values manually when the texture is referenced as a RWTexture2D<uint>, but I can if it is referenced as a RWTexture2D<float4>. I’ve tried to manually translate from uint to float4 via bit shifts, but that doesn’t work either. This still seems to be related to how ComputeShaders treat ARGB32 RenderTextures.

I’ve reduced this down to the bare minimum, even though what I really intend to do with the ComputeShader will be more complex.

// note, this script must be attached to the main camera

using System;
using UnityEngine;
using System.Collections;
using Object = UnityEngine.Object;
using Random = UnityEngine.Random;

public class CameraTextureComputeShaderScript : MonoBehaviour {

	public ComputeShader TextureComputeShader0;  // this must be public so it can be set in the inspector!!!!!!!!!!!!!
	protected int TextureCSMainKernel;

	private RenderTexture outputLifeTexture1;  // a random write texture  for TextureCSMainKernel()
	private Texture2D inputLifeTexture1;  // a readable texture

	int texWidth=1024;
	int texHeight=1024;

	GameObject primitive;


	// Use this for initialization
	void Start () {
		if (TextureComputeShader0!=null)
		{
			TextureCSMainKernel = TextureComputeShader0.FindKernel ("TextureCSMainKernel");

			inputLifeTexture1 = new Texture2D(texWidth, texHeight, TextureFormat.ARGB32, false, true);
			inputLifeTexture1.name="inputLifeTexture1";
			TextureComputeShader0.SetTexture (TextureCSMainKernel, "inputTex1", inputLifeTexture1);

			Color[] pix = new Color[texWidth*texHeight]; // SetPixels takes Color[], rgba
			Random.seed=12345;
			for (int p=0; p<(texWidth*texHeight); ++p)
			     pix[p]= new Color(Random.value,Random.value,Random.value,1);  // rgba
			inputLifeTexture1.SetPixels (pix);
			inputLifeTexture1.Apply ();

			outputLifeTexture1 = new RenderTexture(texWidth, texHeight, 0, RenderTextureFormat.ARGB32, RenderTextureReadWrite.Linear);
			outputLifeTexture1.name="outputLifeTexture1";
			outputLifeTexture1.enableRandomWrite=true;
			outputLifeTexture1.Create ();  // otherwise not created until first time it is set to active
			TextureComputeShader0.SetTexture (TextureCSMainKernel, "outputTex1", outputLifeTexture1);
			RenderTexture.active=outputLifeTexture1;
			Graphics.Blit (inputLifeTexture1, outputLifeTexture1);
			RenderTexture.active=null; 
		}
		
	
		primitive=GameObject.CreatePrimitive(PrimitiveType.Plane);
		primitive.renderer.castShadows=false;
		primitive.renderer.receiveShadows=false;	
		primitive.transform.rotation= Quaternion.AngleAxis (90, Vector3.back); 
		primitive.transform.Rotate(new Vector3(90,0,0),Space.World); 
		primitive.transform.position= new Vector3(-20,10,0);	
		primitive.transform.localScale= new Vector3(1,1,1);	
		Material material= new Material(Shader.Find ("Unlit/Transparent"));
		primitive.renderer.material=material;
		primitive.renderer.material.color= Color.white; 
		primitive.renderer.material.renderQueue=4000;  // force renderqueue to be after all other transparencies
		primitive.renderer.material.mainTexture=outputLifeTexture1;
	//	primitive.renderer.material.mainTexture=inputLifeTexture1;
	}
	
	
	void Update () {
			
		if (TextureComputeShader0 != null)
		    	TextureComputeShader0.Dispatch (TextureCSMainKernel,texWidth/32,texHeight/32,1); 
	}

	void OnPostRender()  
	{   // we are still in the render frame at this point
		if (outputLifeTexture1 != null)
		{
			RenderTexture.active = outputLifeTexture1;  // copy RenderTexture to Texture2D
			inputLifeTexture1.ReadPixels(new Rect(0, 0, outputLifeTexture1.width, outputLifeTexture1.height), 0, 0);  
			inputLifeTexture1.Apply();
			RenderTexture.active = null;
		}
	}
}

And now for the ComputeShader:

#pragma kernel TextureCSMainKernel


Texture2D<uint> inputTex1;     // a readable Texture2D that was defined as ARGB32
RWTexture2D<uint> outputTex1;  // a writable RenderTexture that was defined as ARGB32  


// threads per group
[numthreads(32,32,1)]  
void TextureCSMainKernel (uint3 id : SV_DispatchThreadID)
{
       outputTex1[id.xy]=inputTex1[id.xy]; 	 	
}

Hi,

I don’t know if it’s exactly on-topic or if it can help anyone, but to my knowledge there are two methods to copy one texture to another:
-using SampleLevel
-using Load

I think the SampleLevel method is better when you are manipulating images, because you can set the sampler state and the clamping/repeating is managed by Unity.
The Load method will return 0 if you are outside of the texture bounds, so I think it’s better when you put non-image data in the texture.

Here is a sample code:

ComputeShader :

#define blockWidth 32
#define blockHeight 16

Texture2D<float4> input;
SamplerState samplerinput;
RWTexture2D<float4> output;

int textureWidth;
int textureHeight;

#pragma kernel SampleCode
[numthreads(blockWidth, blockHeight, 1)]
void SampleCode(uint3 id : SV_DispatchThreadID)
{
	float2 coordinates = float2((float)id.x / (float)textureWidth, (float)id.y / (float)textureHeight);
	output[id.xy] = input.SampleLevel(samplerinput, coordinates, 0);
	// or
	output[id.xy] = input.Load(id);
}

C# code

void SampleCode(ComputeShader computeShader, Texture2D texture, out RenderTexture renderTexture)
{
	int kernelIndex = computeShader.FindKernel("SampleCode");
	int blockWidth = 32;
	int blockHeight = 16;

	renderTexture = new RenderTexture(texture.width, texture.height, 0);
	renderTexture.enableRandomWrite = true;
	renderTexture.Create();

	// Set all the necessary buffers
	computeShader.SetInt("textureWidth", texture.width);
	computeShader.SetInt("textureHeight", texture.height);
	computeShader.SetTexture(kernelIndex, "input", texture);
	computeShader.SetTexture(kernelIndex, "output", renderTexture);

	// Dispatch
	computeShader.Dispatch(kernelIndex, texture.width / blockWidth, texture.height / blockHeight, 1);
}

There are different SamplerStates you can use; the details are in the docs at the following address:
http://docs.unity3d.com/Documentation/Manual/ComputeShaders.html

Note that I assumed your texture dimensions would be multiples of (32,16), so in the sample code I didn’t check that the thread ids were within the texture bounds, to keep the code clearer.


It seems to be so:
No matter which texture format you use (8-bit, 16-bit, or 32-bit per channel), you still have to declare it as a float type in your compute shader.
Having done some performance tests, it seems to work just fine and performance is not impacted. In other words, the float values you write are converted to the underlying 8-bit storage by the hardware if you are actually passing an 8-bit texture.

You can also declare it as half; the performance will stay the same.