problem with using a cubemap in an image effect

I’m writing my first image effect; there seems to be a shortage of info on that, but I got the basics working.
Anyway, I want to access the skybox in the shader. I did this by making a cubemap of the skybox, which I sample from inside the shader. This should work, but the result is distorted and doesn’t match the actual skybox.

Because this is the first time I’ve written an image effect from scratch, I first made it as a normal shader, and that works perfectly:

Shader "Custom/SpaceHole" 
{
    Properties 
    {
      _Cube ("Cubemap", CUBE) = "" {}
    }
    SubShader 
    {
		Tags { "RenderType" = "Opaque" }
      
      	Pass
        {
			CGPROGRAM
			#pragma vertex vert
		    #pragma fragment frag
	        #include "UnityCG.cginc"
	      
		    struct v2f 
			{
				float4 pos : POSITION;
		        float3 viewDir : TEXCOORD1;
			};
			
			samplerCUBE _Cube;
				
			v2f vert( appdata_base v ) 
			{
				v2f o;
				o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
				o.viewDir = WorldSpaceViewDir(v.vertex);
				return o;
			} 
			
			half4 frag(v2f i) : COLOR 
			{
				float3 temp = i.viewDir;
		    	temp.y = -temp.y;
		    	temp.z = -temp.z;
				float4 overlay = texCUBE (_Cube, temp);
				return overlay;
			}
		      
			ENDCG
		} 
	}
	    
	Fallback "Diffuse"
  }

I’m using the exact same code in my post effect, but I get a different result when sampling from the cubemap.
Does anyone have any idea why that is and how to fix it?

This is the image effect:

Shader "Custom/AtmosphereOverlayShader" 
{
	Properties 
    {
		_MainTex ("Base", 2D) = "" {}
		_Cube ("Cubemap", CUBE) = "" {}
    }
	
	CGINCLUDE
	
	#include "UnityCG.cginc"
	
	struct v2f 
	{
		float4 pos : POSITION;
		float2 uv : TEXCOORD0;
        float3 viewDir : TEXCOORD1;
	};
	
	sampler2D _MainTex;
	samplerCUBE _Cube;
		
	v2f vert( appdata_img v ) 
	{
		v2f o;
		o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
		o.uv = v.texcoord.xy;
		o.viewDir = WorldSpaceViewDir(v.vertex);
		return o;
	} 
	
	half4 frag(v2f i) : COLOR 
	{
		//float4 color = tex2D (_MainTex, i.uv);
		float3 temp = i.viewDir;
    	temp.y = -temp.y;
    	temp.z = -temp.z;
		float4 overlay = texCUBE (_Cube, temp);
		return overlay;//color*overlay;
	}

	ENDCG 
	
	Subshader 
	{
		Pass 
		{
		  ZTest Always Cull Off ZWrite Off
		  Fog { Mode off }      
	
	      CGPROGRAM
	      #pragma fragmentoption ARB_precision_hint_fastest 
	      #pragma vertex vert
	      #pragma fragment frag
	      ENDCG
		}
	}

	Fallback off	
}

What exactly is going wrong?

I mean beyond the simple fact that a cubemap is not a 2D texture, works very differently, and is used for totally different purposes. (It projects onto the point according to the direction vector you pass in, whereas a 2D texture would simply take that as a UV coordinate.)
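In sampler terms, the difference is roughly this (just an illustration, reusing the sampler names from the shaders above):

// A 2D texture is addressed by a UV coordinate in [0,1],
// while a cubemap is addressed by a direction vector; the face that
// direction points at is what gets sampled.
float4 fromTexture = tex2D(_MainTex, i.uv);      // position-style lookup
float4 fromCubemap = texCUBE(_Cube, i.viewDir);  // direction-style lookup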

Well, right now my image effect should basically just look like I’m watching the skybox, but it doesn’t: when I look in the -z direction it looks OK, but in the +z direction it’s flipped, and in between it’s stretched.

Note that the calculations are exactly the same in my image effect shader and in the normal shader I posted here,
and they work perfectly in the normal shader.

Edit:
This is how it looks when using the normal shader on a prop:

My guess is that the problem lies with doing o.viewDir = WorldSpaceViewDir(v.vertex); in an image effect, since I don’t actually know where these vertices are.
My assumption was that these would be the vertices of a plane matching the near plane of the camera.

But I noticed that changing the FOV didn’t change the result, which it should, so I’m starting to wonder what (and where) the vertices are.

The vertices simply form a plane which covers the entire screen; that’s all there is to it.

Well yes, but where in world coordinates?
I thought they would match the near plane, but then it should work. Then I thought the plane might be centered on the camera, which I guess could cause problems with getting the view direction.
But I tried offsetting them in the direction the camera is pointing (float3(0,0,1) in screen space), which should fix that, but it doesn’t have any effect.

I noticed I get different results depending on the location of the camera (until now, I was using a stationary camera that I only rotated).

So how can the position of the camera change the view direction?
WorldSpaceViewDir(v.vertex); should give the same result regardless of the camera’s position (I get that v.vertex can be different, but the world-space view direction should stay the same).

What do you want to achieve in the end?

More or less a distance fog that blends to the skybox (not exactly, but close enough).

But I really don’t get it; it totally doesn’t behave like it should. There’s something weird about the vertices of image effects, and since it’s completely undocumented, I have no idea what’s wrong.

I just tried:
o.wpos = mul(_Object2World, v.vertex);
o.viewDir = o.wpos.xyz-_WorldSpaceCameraPos;
Which IMO should just work, as it should be the view direction for this vertex.
But the result is still wrong and changes with the camera position.

That is the normal behaviour in GL, as the camera and the dependent quad actually never move, and when you rotate, your direction will get messed up because your trial shader is generic and compiles for all renderers.

But I don’t understand how that is related to distance fog and a cubemap.

You must calculate your fog according to the pixel’s world position, not the quad’s vertex positions.

I explained how to do the above here

In the end, it doesn’t have to be an image effect: if all you want is for the skybox to be affected by fog, you’re better off doing it with a custom skybox shader.

EDIT: If you don’t want all the hassle, I have this image effect in my image effects pack in the store.

No, you misunderstood what I meant.
I don’t want to affect the skybox; I want a distance fog that blends to the skybox instead of to a solid color, so I sample from the skybox (actually a cubemap identical to the skybox) based on the view direction.
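Roughly, the effect I’m after would look something like this fragment (just a sketch of the idea; _FogStart, _FogEnd and the dist value are placeholders, not code I actually have yet):

// Blend the scene colour towards the skybox cubemap with distance,
// instead of towards a solid fog colour.
float4 sceneColor = tex2D(_MainTex, i.uv);
float4 skyColor   = texCUBE(_Cube, normalize(i.viewDir));
float  fogAmount  = saturate((dist - _FogStart) / (_FogEnd - _FogStart));
return lerp(sceneColor, skyColor, fogAmount);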

But I don’t really understand what you meant by “your trial shader is generic and compiles for all renderers”.
What’s a trial shader?

Sorry, English is not my native language.

What I mean is: when dealing with scene objects, everything works fine.
But when dealing with Unity’s camera quad, it is slightly different between DirectX and OpenGL: one is in the range 0 to 1, the other in -1 to +1 (I’m not 100% sure, but this is how it works for me and in general). Also, the w component of the vertex is not filled in in OpenGL, but it is 1 in DirectX.
So, when you do a texture projection, it is safer to do a calculation like uv = float4((uvCoord.x) * 2 - 1, (uvCoord.y) * 2 - 1, depth, 1), which will automatically land on the right coordinates.
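For example, something along these lines (just a sketch; the names are illustrative):

// Rebuild a projection coordinate from the quad's 0-1 UV so it behaves
// the same on Direct3D and OpenGL: remap to -1..+1 and set w to 1 explicitly.
float2 uvCoord = i.uv;   // 0..1 across the screen quad
float4 proj = float4(uvCoord.x * 2 - 1,
                     uvCoord.y * 2 - 1,
                     depth,   // whatever depth value you are projecting with
                     1);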

Sorry, not your fault for misunderstanding, I just didn’t explain it clearly, I guess;
English isn’t my native language either.

And thanks for trying to help.
In the end I fixed it by just trying a different approach.
I’m guessing it just didn’t work because, in the post effect, WorldSpaceViewDir() and _WorldSpaceCameraPos both just don’t give the correct results.
So now I pass camera info (transform, FOV and aspect) to the shader and use that to calculate the view direction, and that works perfectly.
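Roughly, the idea is this (a sketch of the shader side only; the uniform names are just placeholders for values I set from a script with SetVector/SetFloat every frame):

// Assumed to be set from script:
//   _CamRight, _CamUp, _CamForward = the camera transform's axes in world space
//   _TanHalfFov = tan(vertical fov * 0.5, in radians), _Aspect = camera aspect ratio
float3 _CamRight, _CamUp, _CamForward;
float _TanHalfFov, _Aspect;

// Build the view ray from the full-screen quad's UV instead of relying
// on WorldSpaceViewDir():
float2 ndc = i.uv * 2 - 1;   // -1..+1 across the screen
float3 viewDir = normalize(_CamForward
                         + ndc.x * _TanHalfFov * _Aspect * _CamRight
                         + ndc.y * _TanHalfFov * _CamUp);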

Ooh I need that screenspace-viewDir calculation, how did you do it? :wink:

Well, this is an old thread, but this is a fairly large issue if you need the view direction in image space. I’ve been trying to fix this over the past year, using field-of-view calculations, aspect fixes, etc., before coming upon a solution that works fully and is essentially a hack. Anyway, if you want to do this, you can use the following:

In your image effect script, add the following lines:

//This function returns the relative directions of each corner of the camera's perspective projection
void GetViewDir (out Vector3 tleft, out Vector3 tright, out Vector3 bleft, out Vector3 bright)    //0.5625
{
    /*float rfov = fov * Mathf.Deg2Rad;
    float rhfov = Mathf.Atan (Mathf.Tan (rfov / 2) * aspect);
    float hfov = rhfov * Mathf.Rad2Deg;*/
    tleft = Camera.main.ScreenPointToRay (new Vector3 (0f, 0f, 0f)).direction;
    tright = Camera.main.ScreenPointToRay (new Vector3 (Screen.width, 0f, 0f)).direction;
    bleft = Camera.main.ScreenPointToRay (new Vector3 (0f, Screen.height, 0f)).direction;
    bright = Camera.main.ScreenPointToRay (new Vector3 (Screen.width, Screen.height, 0f)).direction;
}

//Then, within your rendering function, where 'mat' is your material
void OnRenderImage (RenderTexture src, RenderTexture dest)
{
...
    Vector3 tleft;
    Vector3 tright;
    Vector3 bleft;
    Vector3 bright;
    
    GetViewDir (out tleft, out tright, out bleft, out bright);
    mat.SetVector ("_Left", tleft);
    mat.SetVector ("_Right", tright);
    mat.SetVector ("_Left2", bleft);
    mat.SetVector ("_Right2", bright);
...
}

Then, in your shader, use the following:

...
float3 _Left;
float3 _Right;
float3 _Left2;
float3 _Right2;

struct appdata
{
    float4 vertex : POSITION;
    float2 uv : TEXCOORD0;
};

struct v2f
{
    float4 vertex : SV_POSITION;
    float2 uv : TEXCOORD0;
    float3 viewDir : TEXCOORD1;
};

v2f vert (appdata v)
{
    v2f o;
    o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
    o.uv = v.uv;
    float3 left = lerp (_Left, _Left2, v.uv.y);
    float3 right = lerp (_Right, _Right2, v.uv.y);
    o.viewDir = normalize (lerp (left, right, v.uv.x));
    return o;
}
...

During this process, the main thing I had issues with was Unity’s weird horizontal FOV calculation, which isn’t standard. Anyway, this is now here for anyone who wants closure on the topic. Just note that the result has to be normalized; if you want magnitude, use it in combination with a depth texture.
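If you do need the magnitude, here is a sketch of combining the normalized ray with the camera depth texture (this assumes your camera renders a depth texture and that _CamForward, the camera's forward vector, is passed in from script; _CameraDepthTexture, SAMPLE_DEPTH_TEXTURE and LinearEyeDepth are Unity built-ins):

sampler2D _CameraDepthTexture;
float3 _CamForward;   // assumed to be set from script

// Recover a world position from the normalized viewDir and the depth texture.
float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
float eyeDepth = LinearEyeDepth(rawDepth);
// eyeDepth is measured along the camera's forward axis, so rescale the ray:
float3 worldPos = _WorldSpaceCameraPos
                + i.viewDir * (eyeDepth / dot(i.viewDir, _CamForward));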
