Rendering two cameras into one final result

I'm trying to do something like the volumetric particle effect mentioned here:

http://www.inframez.com/events_volclouds_slide32.htm

Basically, what it says is that you can render certain elements from a separate camera, blur them, and then apply the result on top of the game camera. However, if you just apply the blur post-processing shader to a camera with a different depth (like you would in an FPS), it blurs the entire screen. So you have to render those elements to a render texture instead and use a post-processing shader to composite it.

The problem is I'm a complete “noob” at post-processing shaders. How can I create one that renders the result of both cameras into a single image? I already have a camera that is rendering the particles and blurring them (I could use a depth buffer on it too), but I have no idea how to composite that with the game camera.
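For reference, the particle camera itself is set up more or less like this (just a sketch of my scene setup; the “Particles” layer name and the transparent clear color are simply what I use):

using UnityEngine;

// Rough sketch of my second camera: it only renders the particle layer,
// and the blur image effect is attached to this same camera.
[RequireComponent(typeof(Camera))]
public class ParticleCameraSetup : MonoBehaviour {

	void Start () {
		Camera cam = GetComponent<Camera>();
		cam.cullingMask = 1 << LayerMask.NameToLayer("Particles"); // particles only
		cam.clearFlags = CameraClearFlags.SolidColor;
		cam.backgroundColor = new Color(0, 0, 0, 0);               // clear to transparent black
	}
}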

Hi, I can't post the source from my own project, but maybe I can give a (hopefully useful) direction along with a couple of rough sketches.

The big problem in doing what the slides describe is that in Unity it's not that easy to “just keep the depth buffer but render into a new color buffer”: a new camera either has its own targetTexture or reuses the default one. However, you can work around this, for example by saving the color buffer into another render texture, clearing the screen, and properly recompositing afterwards.
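In very rough terms, the “save a copy of the color buffer” part could be an image effect like this (an untested sketch of the idea; the class and field names are just placeholders):

using UnityEngine;

// Sketch: keep a copy of what the camera has rendered so far, so it can be
// combined with something else later instead of being overwritten.
[ExecuteInEditMode]
public class GrabColorBuffer : MonoBehaviour {

	public RenderTexture savedColor; // refreshed every frame with the camera's color buffer

	void OnRenderImage (RenderTexture source, RenderTexture destination) {
		if (savedColor == null || savedColor.width != source.width || savedColor.height != source.height) {
			savedColor = new RenderTexture(source.width, source.height, 0);
		}
		Graphics.Blit(source, savedColor);   // keep a copy for the final composite
		Graphics.Blit(source, destination);  // pass the image through unchanged
	}
}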

So have a look at the targetTexture property of the Camera component: create your own RenderTextures for the targetTexture, and write a final Composite post effect where you sample the original image (as copied before) and the particle camera's targetTexture, then combine both based on the mask. If you're a little familiar with shaders, this should be the smallest problem ;-).
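On the script side, that final composite could be driven roughly like this (again only a sketch; _ParticleTex is a placeholder property name that your composite shader would have to declare, and the material comes from whatever composite shader you write):

using UnityEngine;

// Sketch of the final composite: runs on the main game camera and blends the
// (already blurred) particle render texture over the regular camera image.
[ExecuteInEditMode]
public class CompositeEffect : MonoBehaviour {

	public RenderTexture particleTexture; // the particle camera's targetTexture
	public Material compositeMaterial;    // material using your composite shader

	void OnRenderImage (RenderTexture source, RenderTexture destination) {
		compositeMaterial.SetTexture("_ParticleTex", particleTexture);
		// source = what the game camera rendered, destination = the screen
		Graphics.Blit(source, destination, compositeMaterial);
	}
}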

Thanks! The problem is I'm really not versed in post-processing shaders at all. Actually, I'm still looking for a step-by-step tutorial on the subject.

I'm halfway there: I have a 512 × 512 render texture of the second camera with a blur shader on it, but I don't know how to composite that texture with my current camera.
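This is roughly how the second camera is hooked up to that render texture (a sketch of my setup; the 16-bit depth buffer is just what I picked):

using UnityEngine;

// Sketch: point the particle camera at a 512x512 render texture.
// The blur image effect sits on the same camera, so the texture ends up blurred.
public class ParticleRenderTextureSetup : MonoBehaviour {

	public Camera particleCamera; // the second camera (particles only)
	public RenderTexture particleTarget;

	void Start () {
		particleTarget = new RenderTexture(512, 512, 16); // 16-bit depth buffer
		particleCamera.targetTexture = particleTarget;
	}
}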

Here is my first try at the composite shader:

Shader "Composite" {
	Properties {
		_MainTex ("Base (RGB)", 2D) = "white" {}
		_MyTexture ("Texture", 2D) = "white" {} 
		_MyAlpha ("My Alpha", Float) = 0.5
	}
	
	SubShader {
		pass{
			ZTest Always Cull Off ZWrite Off Lighting Off
			Fog { Mode off }
		
			CGPROGRAM
			#pragma vertex vert_img
			#pragma fragment frag
			#pragma fragmentoption ARB_precision_hint_fastest 
			#include "UnityCG.cginc"
	
			uniform sampler2D _MainTex;
			uniform sampler2D _MyTexture;
			float _MyAlpha;
	
	
			float4 frag (v2f_img i) : COLOR {
				float4 c = tex2D(_MainTex, i.uv);
				float4 d = tex2D(_MyTexture, i.uv);
				c.r = c.r*_MyAlpha+d.r*(1-_MyAlpha);
				c.g = c.g*_MyAlpha+d.g*(1-_MyAlpha);
				c.b = c.b*_MyAlpha+d.b*(1-_MyAlpha);
				c.a = 0.5;
				return c;
			}
			ENDCG
		}
	} 
Fallback off
}

but for some reason it just makes the screen whiter instead of revealing the texture.

I'm applying it with the standard script for camera image effects:

using UnityEngine;

[ExecuteInEditMode]
[AddComponentMenu("Image Effects/Composite")]
public class cameraeffectr : ImageEffectBase {

	public Texture2D texture;
	public float alpha = 0.5f;

	void OnStart () {
		material.SetTexture("_MyTexture", texture);
		material.SetFloat("_MyAlpha", alpha);
	}

	// Called by camera to apply image effect
	void OnRenderImage (RenderTexture source, RenderTexture destination) {
		Graphics.Blit (source, destination, material);
	}
}

Any ideas what I'm doing wrong?

Oh, I'm learning it now.