Usages of MemoryLessMode Render Texture?

What are the possible usages of the memoryless RenderTexture?

What amazing or non-amazing things can be done with it?

Its performance seems like it might be good, and the lack of memory usage seems like a bonus?

My reading of the docs is that nothing can be done with it, but I’m almost certainly wrong:

Setting parts of a RenderTexture as memoryless saves some memory. Sometimes it is a lot.

Say you are rendering from a camera to a RenderTexture. That RenderTexture will presumably also have depth (without it, the resulting image would be a mess), but after rendering is done you don’t really care about the depth data any more; you only care about the final color image.

Set the depth to memoryless and then the depth won’t be stored.

Same with MSAA (RenderTextures become much bigger when MSAA is on, so they can store the extra samples MSAA uses, but if you really only care about the final resolved image, you can set MSAA as memoryless).
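For example, a minimal sketch of how you might flag just the depth attachment (not from this thread; the resolution and depth bits are illustrative):

```csharp
// Sketch: request a temporary render texture whose depth buffer lives
// only in on-tile memory. Takes effect on Metal/Vulkan; ignored elsewhere.
var desc = new RenderTextureDescriptor(1920, 1080, RenderTextureFormat.Default, 24);
desc.memoryless = RenderTextureMemoryless.Depth; // depth never backed by system/GPU memory
RenderTexture rt = RenderTexture.GetTemporary(desc);
// ...render the camera into rt; the color result is usable as normal...
RenderTexture.ReleaseTemporary(rt);
```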

I am not sure about the purpose of setting .Color as memoryless, maybe someone more knowledgeable than me will chime in.

Am about to reveal how stupid I am.

The docs claim a memoryless RenderTexture is unreadable and unwritable. So how does anything get out of it?

This is how far back I am in not understanding this feature’s ability.

So, if, as you say, the depth is rendered into the memoryless texture… it’s gone. It happened, but like a tree falling in an empty forest, I can’t see how anyone heard it.

Sorry, I know this means there’s more to explain, if you’re to have any hope of getting this into my dense head.


The manual is fairly clear on this though.

Say you have a 1920x1080 RenderTexture you want to render with MSAA. Since MSAA requires taking extra samples, you need extra space: for 2x MSAA you need double the RAM, since in the worst case you need 2 samples per pixel.

But in the end, you only really care about the resolved 1920x1080 image. Setting the MSAA part to memoryless gives you just the resolved 1920x1080 image and discards the rest.
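Rough arithmetic for that (my numbers, assuming 4 bytes per color sample):

```csharp
// Back-of-envelope memory math for the color attachment only.
const int width = 1920, height = 1080, bytesPerPixel = 4, msaaSamples = 2;
long resolvedBytes = (long)width * height * bytesPerPixel;                // 8,294,400 (~8 MB)
long msaaBytes     = (long)width * height * bytesPerPixel * msaaSamples;  // 16,588,800 (~16 MB)
// With RenderTextureMemoryless.MSAA set, the multisampled surface stays
// on tile, so only resolvedBytes of memory is ever needed for color.
```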

Same with depth: while rendering, it’s temporarily stored on tile so the GPU can ask “is this pixel I’m about to render behind or in front of the already existing pixel?” So it’s useful while rendering, but after you’re done there is no need to store it in CPU or GPU memory.

PS: I’m sure I’m oversimplifying things and there may be inaccuracies to what I’m saying, since my understanding of memoryless is not super-technical, but I’m fairly sure that what I’m saying is in the same ballpark as correct.


Perhaps I’m really, really miles away from understanding this.

The following are each Yes/No questions, to see how far off I am:

This memoryless RenderTexture is something a camera can render to, and it’s actually in the tile renderer of an iOS device, so it’s the backing ‘buffer’ that’s going to be blitted to the screen?

It’s therefore a 1:1 representation of the screen upon which the entire render of each frame is built up before blitting?

As such, the best place (in time) to put things into it would be before everything else (as a background), or right after everything else, like overlaying UI or particles or flares etc?

And since it’s tied directly to the tiles of the screen, it can’t be a different size?

Which is where I get lost about the MSAA: how does a 1:1 mapping of resolution help you do MSAA smoothing?

A memoryless RenderTexture is still a RenderTexture and can do whatever a RenderTexture can do. Memoryless is not a different flavor of RenderTexture; it’s just a mode you can set for it on Metal and Vulkan.

The rest of your questions are generally about RenderTextures as a whole and the answer to all is “Yes, but only if you set it up / want it that way”, or “Not really”.


So here is my example script from another thread ( How to use memoryless? ), to which I’ve added comments that will hopefully help.

using UnityEngine;
public class memoryless : MonoBehaviour
{
    // I like using descriptors, but it's not really necessary to do it this way.
    RenderTextureDescriptor mainrtdesc;
    RenderTexture rt;
    void Awake()
    {
        // I'm setting it to be Screen.width and Screen.height, but it could be any resolution, it doesn't really matter.
        // Although since it is intended to be blitted to screen at the end, if it's lower than Screen, stuff will look
        // blurry.
        mainrtdesc = new RenderTextureDescriptor(Screen.width, Screen.height, RenderTextureFormat.Default, 24);

        // Setting Memoryless MSAA to save some memory
        mainrtdesc.memoryless = RenderTextureMemoryless.MSAA;
        mainrtdesc.useMipMap = false;
        mainrtdesc.msaaSamples = 2;
    }
    void OnPreRender()
    {
        rt = RenderTexture.GetTemporary(mainrtdesc);

        // Setting the camera in my game to render to my RenderTexture instead of the back buffer.
        GetComponent<Camera>().targetTexture = rt;
    }
    void OnPostRender()
    {
        GetComponent<Camera>().targetTexture = null;

        // Here I'm just blitting my RenderTexture to the backbuffer (when you set target to null in a Blit, Unity renders to backbuffer), so we can see it on screen.
        Graphics.Blit(rt, null as RenderTexture);

        RenderTexture.ReleaseTemporary(rt);
    }
}

// This script in general is pretty stupid, it does practically nothing. Presumably for this to make sense, I would
// Blit with a material / shader, so I can do some processing to the RenderTexture before Blitting to screen, or
// it could be the start of a chain of Blits to do all my post processing, before blitting the final RT to screen.

// For example, here is a chain of Blits I do for our custom post FX
/*
            // Downsample
            TapMat.SetFloat(ShaderID.brightdepthID, DepthBrightness);
            Graphics.Blit(rt, downRT, TapMat); // First Downscale

            // Do Bloom
            Graphics.Blit(downRT, bloomRT, BloomMat); // Prepare for Bloom

            advMat.SetFloat(ShaderID.OffSetID, 0.55f);
            Graphics.Blit(bloomRT, bloomRT2, advMat, 1); // Blur Bloom #1

            RenderTexture.ReleaseTemporary(bloomRT);
            bloomRT = GetTemporaryRenderTexture(capturertdesclow);
            Graphics.Blit(bloomRT2, bloomRT, advMat, 3); // Blur Bloom #2 : bloomRT is final bloom rt

            RenderTexture.ReleaseTemporary(bloomRT2);
            bloomRT2 = GetTemporaryRenderTexture(capturertdesclow);
            advMat.SetFloat(ShaderID.OffSetID, 1.5f);
            Graphics.Blit(bloomRT, bloomRT2, advMat, 5); // Blur Bloom #3

            RenderTexture.ReleaseTemporary(bloomRT);
            advMat.SetFloat(ShaderID.OffSetID, 2.5f);
            bloomRT = GetTemporaryRenderTexture(capturertdesclow);
            Graphics.Blit(bloomRT2, bloomRT, advMat, 5); // Blur Bloom #4

            RenderTexture.ReleaseTemporary(bloomRT2);

            // Do DOF
            advMat.SetFloat(ShaderID.OffSetID, 1.55f);
            Graphics.Blit(downRT, downRT2, advMat, 0); // Blur #1 for dof

            RenderTexture.ReleaseTemporary(downRT);
            downRT = GetTemporaryRenderTexture(capturertdesc);

            Graphics.Blit(downRT2, downRT, advMat, 0); // Blur #2 for dof

            RenderTexture.ReleaseTemporary(downRT2);

            Graphics.Blit(downRT, null as RenderTexture, advMat, finalPass);

            RenderTexture.ReleaseTemporary(rt);
            RenderTexture.ReleaseTemporary(bloomRT);
            RenderTexture.ReleaseTemporary(downRT);
*/

Then I blame these lines from the docs; they really threw me for a loop:

"Memoryless render textures are temporarily stored in the on-tile memory when it is rendered. It does not get stored in CPU or GPU memory. This reduces memory usage of your app but note that you cannot read or write to these render textures.
On-tile memory is a high speed dedicated memory used by mobile GPUs when rendering."

I very wrongly thought this meant it’s the backing cache/buffer for the blit to the screen, and that it couldn’t be used at all, because they’re saying “but note, you cannot read or write to these render textures”.


Here’s the other part of my stupidity… I thought the two bits of work above wouldn’t be necessary with the memoryless RenderTexture, because I thought it was the “backbuffer” (tiles) that would be blitted to the screen anyway, and that that was the reason you couldn’t write into these. But then I couldn’t figure out how these could be a thing either, because I couldn’t see how a camera could write into them when the docs say:

“but note, you cannot read or write to these render textures”.

Does this mean the docs are somewhat wrong?

Maybe also read Apple’s description.

Also, I think the confusion is that the manual doesn’t mean the RenderTexture as a whole, just the “attachments” (as Apple calls them) that you set as memoryless.
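A sketch of what that looks like in Unity’s API: the memoryless field is a bitmask, so each attachment can be flagged independently (values here are illustrative, not from the thread):

```csharp
var desc = new RenderTextureDescriptor(Screen.width, Screen.height, RenderTextureFormat.Default, 24);
desc.msaaSamples = 4;
// Flag the depth attachment and the MSAA sample storage as memoryless;
// the resolved color attachment is still stored and readable as usual.
desc.memoryless = RenderTextureMemoryless.Depth | RenderTextureMemoryless.MSAA;
```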

Cheers.

I had read that a few hours ago. And it was part of the excitement about the potential speed of this thing.

I want to throw a LOT of particles in it.