Avoiding GrabPass?

Hey, all,

I’ve written a multipass shader that uses the results of the first pass as the source for the next pass. The problem is that, at the moment, I’m doing this with GrabPass, which has two key issues:

  1. It’s slow.
  2. It grabs the whole screen.

I want neither of these things! :slight_smile: I just want to be able to use the image generated on the surface from the first pass so I can use it in the second pass.

Is this possible, and if so, how?

Thanks, in advance…

SB

What do you want to do exactly? Could it be done in just a single pass using a CG program?

Nope - the code is already in CG, and I’ve run out of registers to do it in a single pass… :frowning:

Can you create a camera and do a RenderWithShader, or use Graphics.DrawMesh, to render just the first pass into a specific RenderTexture? The latter would probably be more efficient. Then set that RenderTexture with Shader.SetGlobalTexture or yourMaterial.SetTexture and render the second pass?
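
On the shader side, the second pass would then just sample that render texture like any other texture. A rough sketch — _FirstPassRT is only an example name, assigned from script with Shader.SetGlobalTexture or yourMaterial.SetTexture as above:

// Fragment end of the second pass. No Properties entry is needed
// if the texture is set globally from script.
sampler2D _FirstPassRT;

struct v2f
{
     float4 pos : SV_POSITION;
     float2 uv  : TEXCOORD0;
};

fixed4 frag (v2f i) : SV_Target
{
     // The second pass reads the first pass's output as an ordinary texture.
     return tex2D(_FirstPassRT, i.uv);
}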

GrabPass is quite slow because it actually does some pixel copy operations on the CPU.

Well, this uses a render texture to begin with (it grabs from a second camera as the source texture), so that’d be two render textures just to do this one effect?

There’s really no way to just drop the output of the first pass into the second? The only reason I really don’t want to use GrabPass is the fact that it grabs the whole screen, not just the previous shader pass.

Does no one else think that’s a fairly major limitation?

Also, I’m not really sure I understand how I’d use your suggestion (it sounds like you’re saying I should split the shader in two?) - any chance you could walk me through it?
Thanks! :slight_smile:

Well, the whole screen is there anyway. It’s not like it’s re-rendering everything again for the GrabPass; it might be a bit slower now that it’s copying the full frame buffer, but I don’t imagine by a staggering amount, or it’d be a silly move on Unity’s part.

If it’s that much of a requirement, perhaps drop back to the last version, where GrabPass captured only the screen-space bounding box of the mesh.

What effect are you trying to do, exactly? Perhaps we can suggest a better way or suggest optimisations.

It’s just a special effect that requires splitting the RGB channels, with various alpha passes added on top (combining moving textures along the way). It’s applied to the output of a second camera, which is then used as an image in the main view.
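
(Very roughly, the channel-split part is along these lines — the texture and offset names here are just placeholders:)

// (v2f here is the usual position + uv struct)
sampler2D _MainTex;
float4 _RedOffset, _GreenOffset, _BlueOffset;   // animated from script or with _Time

fixed4 frag (v2f i) : SV_Target
{
     // Each channel is sampled at its own (moving) offset and recombined.
     fixed r = tex2D(_MainTex, i.uv + _RedOffset.xy).r;
     fixed g = tex2D(_MainTex, i.uv + _GreenOffset.xy).g;
     fixed b = tex2D(_MainTex, i.uv + _BlueOffset.xy).b;
     return fixed4(r, g, b, 1);
}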

GrabPass isn’t usable for what I want. The game I’m writing is pretty simple graphically, but it needs some nice effects (like the one I describe), so I could live with the speed hit from the GrabPass command.

The issue comes from the fact that GrabPass grabs screen space, not texture space. I’m using a render texture to process an image grabbed from a second camera, then applying that render texture to an object in the scene (main camera); GrabPass then grabs the whole screen, which includes everything rendered, including stuff I don’t want in the second pass (the background, for example).

If GrabPass worked in Texture Space, I’d have no issues. I’d also have no issues if I could pass the output from one pass into the second pass. I’m struggling to believe that there’s no way to do this??

Oh, you mean you essentially want to bake out a texture?

There are ways… have a shader that outputs UV coordinates rather than vertex coordinates as the SV_POSITION, do the base pass you want with your shader, GrabPass that, and then use the grabbed result as a regular texture in the next pass.

Might need to be done on a second camera to ensure it’s not drawing on-screen during the game…
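
For reference, reading the grabbed image back in the following pass usually looks something like this — a sketch, assuming the default _GrabTexture name and the ComputeGrabScreenPos helper from UnityCG.cginc:

GrabPass { }                       // copies the current framebuffer into _GrabTexture

// ... then, inside the next pass's CGPROGRAM (with UnityCG.cginc included) ...
sampler2D _GrabTexture;

struct v2f
{
     float4 pos     : SV_POSITION;
     float4 grabPos : TEXCOORD0;
};

v2f vert (appdata_base v)
{
     v2f o;
     o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
     o.grabPos = ComputeGrabScreenPos(o.pos);   // screen-space coords for the grabbed image
     return o;
}

fixed4 frag (v2f i) : SV_Target
{
     return tex2Dproj(_GrabTexture, UNITY_PROJ_COORD(i.grabPos));
}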

Well, that’s it - that’s pretty much what I’m doing - like so:

(abstracted - code not with me at the moment):

SubShader
{
     // First pass: renders the source image for the effect.
     Pass
     {
          CGCODE
     }
     // Grabs the current framebuffer into _GrabTexture.
     GrabPass { }
     // Second pass: works on the grabbed texture.
     Pass
     {
          CGCODE
     }
}

The issue being that the first pass occurs and is rendered to the texture, which is applied to a model in the scene; then GrabPass occurs, grabbing EVERYTHING in the scene (including the background, etc.), and the second pass then works on that captured texture.

All I want to do is to take the texture output of Pass 1 and feed it into Pass 2, unpolluted by anything else.

Well, GrabPass won’t grab in texture space, it just grabs the framebuffer as it currently stands. So it sounds like GrabPass won’t do what you want.

Perhaps try using a camera with a replacement shader that renders the object into UV space on a render texture; then you can use that render texture in your regular scene as a normal UV-mapped texture?

So for the render texture pass you probably want o.pos to be something like
o.pos = mul(UNITY_MATRIX_MVP, float4(v.texcoord.x, v.texcoord.y, 0, 1)); // Might have to do some normalizing to get it to draw full-screen.

Then in your regular scene, just use the render texture like a regular texture.
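
To make that concrete, here’s a minimal sketch of a shader the second camera could use to “unwrap” the object into UV space on its render texture. It maps the UVs straight into clip space rather than going through the MVP matrix, which sidesteps the normalizing mentioned above; the shader and property names are just examples, and it assumes the mesh has a clean, non-overlapping UV layout.

Shader "Hidden/BakeToUVSpace"
{
     Properties { _MainTex ("Texture", 2D) = "white" {} }
     SubShader
     {
          Pass
          {
               CGPROGRAM
               #pragma vertex vert
               #pragma fragment frag
               #include "UnityCG.cginc"

               sampler2D _MainTex;

               struct v2f
               {
                    float4 pos : SV_POSITION;
                    float2 uv  : TEXCOORD0;
               };

               v2f vert (appdata_base v)
               {
                    v2f o;
                    // Place each vertex at its UV coordinate in clip space,
                    // so the mesh is drawn flat across the whole render texture.
                    float2 uv = v.texcoord.xy * 2.0 - 1.0;
                    o.pos = float4(uv.x, -uv.y, 0.0, 1.0);   // flip Y if it comes out upside down
                    o.uv  = v.texcoord.xy;
                    return o;
               }

               fixed4 frag (v2f i) : SV_Target
               {
                    // Whatever the first pass normally computes goes here;
                    // sampling _MainTex is just a stand-in.
                    return tex2D(_MainTex, i.uv);
               }
               ENDCG
          }
     }
}

The material on the object in the main scene then samples that render texture with the object’s normal UVs, exactly as described above.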

So there’s really no way to actually pass the output of one pass to another?

Not in texture space, no.

Other than what I described, at least.

If you want the output available in a shader register then you’ll need to use an intermediate texture (either a RenderTexture or by using GrabPass). However you can access the current framebuffer contents at the alpha blend stage using DstColor, DstAlpha, OneMinusDstColor or OneMinusDstAlpha.
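
For the blend-stage route, that just means choosing the pass’s Blend factors so the existing framebuffer contents feed into the result; a minimal sketch:

Pass
{
     // Multiply this pass's output by whatever is already in the framebuffer.
     Blend DstColor Zero

     CGPROGRAM
     // ... fragment shader outputs the value to multiply by ...
     ENDCG
}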

OK - so I could add a second render texture to this and ‘ping-pong’ the image; how would I go about that?

I had similar problems with my shaders in a research project, where I have to visualize special data consisting of several textures and a lot of parameterization, all done in fragment shaders. When the calculations got too complicated, I ran out of registers too.
So this might be an option you probably haven’t thought about so far: Unity uses SM2 by default (at least it did for me). After changing to SM3, the number of available registers rose dramatically and gave me enough room for my stuff.
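
For what it’s worth, the switch is just a pragma inside the CGPROGRAM block (whether the target hardware actually exposes SM3 is another matter):

CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma target 3.0   // request Shader Model 3.0 instead of the default 2.0
// ... rest of the shader ...
ENDCG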

R-Type - that’d be great, but I’m writing this for iOS, so I’m not sure that’d work. If it’d work on certain iPhones and not others, I’d be OK with that, but as I understand it, iOS is SM2 only?

My guess is you can get all this done in one pass. Pick your worst target hardware, and try it out. iOS is nice because there are very few devices, and their graphics horsepower only increases from one generation to the next.

Don’t know about iOS. I’m working with D3D on Windows. Maybe just give it a try and see how it compiles for OpenGL…

Daniel - the big problem is that the second pass is a blur, which means that in the vertex code I’ve got to grab four sets of UVs for even the simplest version of the effect. On top of that, the previous pass uses four textures, two of which it splits into separate (and movable) RGB layers, so I’m pretty sure it can’t be done in one pass. If you have a lightning-fast way to do a blur that doesn’t use four sets of UVs, I’d love to hear it. Unfortunately, iOS doesn’t support tex2DARRAY, so I can’t grab the surrounding pixels all at once…
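
For what it’s worth, the pattern being described — computing all four tap coordinates in the vertex shader so the fragment shader only does plain texture reads — looks roughly like this (tap offsets and weights are illustrative):

// (inside a CGPROGRAM block with UnityCG.cginc included)
sampler2D _MainTex;
float4 _MainTex_TexelSize;   // Unity fills this with (1/width, 1/height, width, height)

struct v2f_blur
{
     float4 pos : SV_POSITION;
     float2 uv0 : TEXCOORD0;   // four separate interpolators: on older iOS GPUs,
     float2 uv1 : TEXCOORD1;   // reading them directly avoids dependent texture reads
     float2 uv2 : TEXCOORD2;
     float2 uv3 : TEXCOORD3;
};

v2f_blur vert (appdata_base v)
{
     v2f_blur o;
     o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
     float2 t = _MainTex_TexelSize.xy;
     o.uv0 = v.texcoord.xy + float2(-t.x, 0);
     o.uv1 = v.texcoord.xy + float2( t.x, 0);
     o.uv2 = v.texcoord.xy + float2(0, -t.y);
     o.uv3 = v.texcoord.xy + float2(0,  t.y);
     return o;
}

fixed4 frag (v2f_blur i) : SV_Target
{
     // Simple 4-tap box blur with equal weights.
     return 0.25 * (tex2D(_MainTex, i.uv0) + tex2D(_MainTex, i.uv1)
                  + tex2D(_MainTex, i.uv2) + tex2D(_MainTex, i.uv3));
}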