Can't figure out error on Graphics.CopyTexture().

I’m getting an error when using Graphics.CopyTexture(). It reads “Graphics.CopyTexture can only copy between same texture format groups (d3d11 base formats: src=76 dst=9)”.

Now for context, I wrote some shaders that allow data to be stored in the RGBA channels and translated into my game, and that all works perfectly.

I’m adding a “base info texture” for the artists and designers to use: they can create a texture in Photoshop or similar with the info in the channels, and the system that sets up the shaders will use that data as a starting point instead of a blank texture.

The Graphics.CopyTexture call copies from a serialized Texture2D to a RenderTexture that’s created at runtime. The RenderTexture constructor can’t take a TextureFormat read from the Texture2D to match it; it only takes a RenderTextureFormat enum. When I Debug.Log the texture’s format, the PNG imported from Photoshop is DXT5, no matter how I switch the colour profiles or bit depths. DXT5 isn’t even an option in the RenderTextureFormat enum, so I can’t set up my render texture with a format compatible with my input texture.
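Here’s a stripped-down sketch of what I’m doing; the class and field names are just placeholders, not my actual setup:

using UnityEngine;

public class BaseInfoSetup : MonoBehaviour
{
    [SerializeField] private Texture2D baseInfoTexture; // imported PNG, which Unity compresses to DXT5

    private RenderTexture dataRT;

    private void Awake()
    {
        // The RenderTexture constructor only accepts a RenderTextureFormat,
        // so there's no way to pass the source's TextureFormat (DXT5) here.
        dataRT = new RenderTexture(baseInfoTexture.width, baseInfoTexture.height, 0, RenderTextureFormat.ARGBHalf);
        dataRT.Create();

        Debug.Log(baseInfoTexture.format); // prints DXT5 no matter the export settings

        // This is the call that throws:
        // "Graphics.CopyTexture can only copy between same texture format groups (d3d11 base formats: src=76 dst=9)"
        Graphics.CopyTexture(baseInfoTexture, dataRT);
    }
}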

Does anyone have a solution? Is there a way to export from Photoshop as something RenderTextures can be formatted as? Is there a way to set up my render texture to use DXT5?

src=76 is DXGI_FORMAT_BC3_TYPELESS aka DXT4 or DXT5
dst=9 is DXGI_FORMAT_R16G16B16A16_TYPELESS

BC3 is a compressed format. You can’t have a render target with a compressed format because the GPU can’t compress on the fly. You should, however, be able to copy from a compressed texture to another compressed texture, as long as the destination is not a render target and has the same format.
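As a rough sketch (names made up), something like this should be a valid copy, since both textures are DXT5 and neither is a render target:

using UnityEngine;

public static class CompressedCopyExample
{
    // Both textures share TextureFormat.DXT5 (BC3), so CopyTexture can do a straight GPU-side copy.
    public static Texture2D Duplicate(Texture2D source)
    {
        var destination = new Texture2D(source.width, source.height, TextureFormat.DXT5, false);
        Graphics.CopyTexture(source, destination);
        return destination;
    }
}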

That said, you could use Graphics.Blit instead of CopyTexture, which will decompress the texture on the fly. It is slightly more expensive because it runs a shader for each pixel that gets copied.

Alternatively, you could also disable compression for the source texture in the import settings.
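You can do that in the texture’s Inspector (Compression: None), or with a small editor script along these lines; the asset path is just a placeholder:

#if UNITY_EDITOR
using UnityEditor;

public static class DisableCompressionExample
{
    [MenuItem("Tools/Uncompress Base Info Texture")]
    private static void DisableCompression()
    {
        // Placeholder path - point this at your actual texture asset.
        var importer = (TextureImporter)AssetImporter.GetAtPath("Assets/Textures/BaseInfo.png");
        importer.textureCompression = TextureImporterCompression.Uncompressed;
        importer.SaveAndReimport(); // the texture then imports as an uncompressed format such as RGBA32
    }
}
#endif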

Yeah, I have it so that if the input texture is null, the RT is set to ARGBHalf, because I don’t need that much precision for the encoded data.
I couldn’t find an import setting that makes the input texture match that format.

You said I could just Graphics.Blit it? That’s an option for sure; it only runs on Awake, so it’s not that expensive. Is there a default Graphics.Blit shader, or should I just write a cheap shader that copies the pixels?

There is a default one. Just use this overload:
public static void Blit(Texture source, RenderTexture dest);
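Roughly like this (same placeholder names as before):

using UnityEngine;

public class BaseInfoBlit : MonoBehaviour
{
    [SerializeField] private Texture2D baseInfoTexture;

    private RenderTexture dataRT;

    private void Awake()
    {
        dataRT = new RenderTexture(baseInfoTexture.width, baseInfoTexture.height, 0, RenderTextureFormat.ARGBHalf);
        dataRT.Create();

        // Uses Unity's built-in blit material, which samples the DXT5 source
        // and writes into the render target, so the formats no longer need to match.
        Graphics.Blit(baseInfoTexture, dataRT);
    }
}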

By the way, R16G16B16A16 is still a lot of precision. You usually only need that for floating-point or integer textures. Normally you would use 8 bits per channel for a color texture unless it’s HDR.
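A quick sketch of the difference, if it helps (the helper names are made up):

using UnityEngine;

public static class FormatPrecisionExample
{
    // ARGB32: 8 bits per channel, 32 bits per pixel total - plenty for plain color data.
    public static RenderTexture CreateColorRT(int width, int height)
    {
        return new RenderTexture(width, height, 0, RenderTextureFormat.ARGB32);
    }

    // ARGBHalf: a 16-bit half float per channel, 64 bits per pixel total -
    // only needed for HDR or data that really requires floating-point precision.
    public static RenderTexture CreateHalfFloatRT(int width, int height)
    {
        return new RenderTexture(width, height, 0, RenderTextureFormat.ARGBHalf);
    }
}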

PS: Did you read the links that I provided?

Thanks for the help!

And no, I hadn’t realised they were links; I was reading at work and the mobile client didn’t show them.

And yes, my apologies: because of the naming scheme I had originally thought ARGBHalf would be half the precision of ARGB32. It turns out the 32 is for ALL channels combined, while Half means 16 bits per channel, so thanks for helping me notice that as well.

No worries. Yeah, the Unity naming convention for the formats is confusing.

(image attached)

I have a weird bug that causes all my models not to show up. What is the problem?