Is Unity culling verts before vertex shader?

I am attempting to implement a fisheye style lens.

Basically I project each vertex onto a unit sphere around the camera. Then I move the camera to the back of that unit sphere.

If every vertex were in front of the camera, it would get projected onto the forward hemisphere, and a 90° field of view would render that hemisphere as a circle touching the corners of the window.

Objects that would have been out of view will get squashed towards the centre.
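The remap described above can be sketched in plain Python (a hypothetical re-implementation just to check the math, not the shader itself; the camera looks down -z, as in OpenGL):

```python
import math

def fisheye_remap(v):
    """Project a camera-space point v onto the unit sphere around the
    camera, then view it from the back of that sphere at (0, 0, +1),
    keeping the original distance."""
    r = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    # Hit-test the unit sphere, i.e. normalize v.
    ux, uy, uz = v[0] / r, v[1] / r, v[2] / r
    # Moving the camera back to (0, 0, +1) is equivalent to shifting
    # every point by z -= 1.
    sx, sy, sz = ux, uy, uz - 1.0
    s = math.sqrt(sx * sx + sy * sy + sz * sz)
    # Restore the original distance along the new direction.
    return (r * sx / s, r * sy / s, r * sz / s)

# A point 90° off-axis (straight to the camera's right) ...
p = fisheye_remap((2.0, 0.0, 0.0))
# ... lands in the forward hemisphere (z < 0, since the camera looks down -z).
print(p)  # roughly (1.414, 0.0, -1.414)
```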

Here is my shader code:

Shader "Pi shader"
{ 
   Properties {
      _Color ("Diffuse Material Color",          Color  ) = ( 0., 1.,0. ,1. ) 
      _DidHit( "bool whether focalpoint exists", Int    ) = 0
      _Focus ( "focal point (worldSpace)"      , Vector ) = ( 1., 0., 0. )
   }
      
   SubShader { 
      Pass {
         Tags { "LightMode" = "ForwardBase" } 
         
         GLSLPROGRAM // here begins the part in Unity's GLSL
         #include "UnityCG.glslinc"
         
         varying vec4 v2f_color;
         
         #ifdef VERTEX // here begins the vertex shader
 
         uniform vec4 _Color;
         uniform vec4 _Focus;
         uniform int  _DidHit;
         uniform vec4 _LightColor0; 
         
         
         void main()
         {
         	vec4 pos_camSpace = gl_ModelViewMatrix * gl_Vertex;
         	// note: _Focus is world-space, so the full model-view matrix is the wrong transform here; unused below
         	vec4 focus_camSpace = gl_ModelViewMatrix * _Focus;
         	
         	vec4 X = pos_camSpace;
         	
         	// hit-test unit sphere, i.e. normalize X       	
         	vec3 X_unitSphere = normalize( vec3(X) );
         	
         	// now project from back of sphere, i.e. (0,0,+1) as camera looks down -ve z
         	// Moving the camera back by camera.z += 1.0 should be equivalent to keeping the camera at 0,
         	//    and moving all the points away by v.z -= 1.0
         	vec3 X_proj = length(X) * normalize( vec3( X_unitSphere.x, X_unitSphere.y, X_unitSphere.z - 1.0 ) );
         	
         	
         	gl_Position = gl_ProjectionMatrix * vec4( X_proj, 1.0 );
         	
			// crude "headlight" diffuse in eye space: dot the eye-space normal
			// with the direction from the vertex back towards the camera (the origin)
            vec3 normal_es = normalize( gl_NormalMatrix * gl_Normal );
            vec3 vertex_to_cam_es = normalize( -vec3(pos_camSpace) );
 
            vec3 diffuseReflection = vec3(_LightColor0) * vec3(_Color) 
                     * max(0.0, dot(normal_es, vertex_to_cam_es));
 
            v2f_color = vec4(diffuseReflection, 1.0);
         }
         #endif
         
         #ifdef FRAGMENT
         void main() { gl_FragColor = v2f_color; }
         #endif
 
         ENDGLSL
      }
   }
}

It seems to correctly warp all objects that would have been within the original 90° camera angle.
But all objects that would have been outside of this frustum fail to render.

As I slide the camera’s FoV from the inspector, peripheral objects pop in and out of existence.

As you can see, going from FoV = 95.8 to 95.9 makes one object appear here all at once, but no intermediate value makes only part of that object appear.

I think that frustum culling happens between the vertex and fragment shaders… Also, triangles partially offscreen get clipped.

Maybe there is a bounding box around the object, and this is checked by Unity against the camera’s frustum, and the render is simply switched off if the object is determined to be off-screen…?

Is there any way to get around this?

I guess I could use a 179° FoV camera and then manually construct my own projection matrix.

Does this make sense?

Frustum culling happens before the object is rendered. It doesn’t happen between the vertex and fragment shader. And it is based on the bounding box. It’s not directly based on the FoV, but on the actual projection matrix.

As far as I know you can’t disable frustum culling, so you’ll have to trick it by adjusting the projection matrix or the object bounds:
http://forum.unity3d.com/threads/71051-Disable-Frustum-Culling
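For intuition, here is a minimal sketch of this kind of test in plain Python: extract the six frustum planes from a projection matrix (the Gribb/Hartmann method) and cull a bounding box only when it lies entirely behind one of them. This illustrates the general technique, not Unity’s actual implementation; the matrix and box values are arbitrary.

```python
import math

def perspective(fov_deg, aspect, near, far):
    """OpenGL-style projection matrix (column-vector convention), as row tuples."""
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return [
        (f / aspect, 0.0, 0.0, 0.0),
        (0.0, f, 0.0, 0.0),
        (0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)),
        (0.0, 0.0, -1.0, 0.0),
    ]

def frustum_planes(m):
    """Gribb/Hartmann plane extraction from a (view-)projection matrix."""
    r0, r1, r2, r3 = m
    add = lambda a, b: tuple(x + y for x, y in zip(a, b))
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    return [add(r3, r0), sub(r3, r0),   # left, right
            add(r3, r1), sub(r3, r1),   # bottom, top
            add(r3, r2), sub(r3, r2)]   # near, far

def aabb_culled(planes, center, half):
    """True if the axis-aligned box is entirely outside at least one plane."""
    cx, cy, cz = center
    corners = [(cx + sx * half, cy + sy * half, cz + sz * half)
               for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
    for a, b, c, d in planes:
        if all(a * x + b * y + c * z + d < 0.0 for x, y, z in corners):
            return True  # every corner is behind this plane: box is off-screen
    return False

planes = frustum_planes(perspective(90.0, 1.0, 0.1, 100.0))
print(aabb_culled(planes, (0.0, 0.0, -5.0), 0.5))   # False: in view, gets drawn
print(aabb_culled(planes, (50.0, 0.0, -5.0), 0.5))  # True: off to the side, culled
```

Note that the test only sees the bounds and the projection matrix; it knows nothing about what the vertex shader will later do to the vertices, which is exactly why warped geometry gets culled here.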

Thanks,

	void Awake () {
		Mesh mesh = GetComponent<MeshFilter>().mesh;
		// inflate the bounds so Unity's frustum test never rejects this mesh
		mesh.bounds = new Bounds( Vector3.zero, 1000f * Vector3.one );
	}

attached to each object fixes it.

Does this mean that frustum culling is not part of the GL pipeline?

I’m looking at http://http.developer.nvidia.com/CgTutorial/cg_tutorial_chapter04.html

It says that applying the projection matrix takes vertices from eye space into clip space.
I think that triangles not completely contained within the bounding cube in clip space get clipped or culled between the vertex and fragment shaders.

(x, y, z each in the range [-1, +1] after the perspective divide means the triangle is completely contained)
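A quick numeric check of that containment test, sketched in plain Python (OpenGL-style perspective matrix with illustrative values; strictly, the [-1, +1] comparison applies after dividing by the clip-space w):

```python
import math

def ndc(v, fov_deg=90.0, aspect=1.0, near=0.1, far=100.0):
    """Push an eye-space point through a perspective matrix and the
    perspective divide; the result is visible iff all coords are in [-1, 1]."""
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    x, y, z = v
    cx = (f / aspect) * x
    cy = f * y
    cz = (far + near) / (near - far) * z + 2.0 * far * near / (near - far)
    cw = -z  # clip-space w for this matrix
    return (cx / cw, cy / cw, cz / cw)

inside = ndc((1.0, 0.0, -5.0))    # within the 90° cone: all coords in [-1, 1]
outside = ndc((10.0, 0.0, -5.0))  # beyond the cone: |x| > 1 after the divide
print(inside, outside)
```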

But this must be something completely different from ‘frustum culling’, which must take place before the shader gets a chance to act.

Yeah, this is “clipping”; it happens after the vertex stage and before later stages such as the fragment stage, IIRC. So if you do a fisheye lens by having the vertex shader reposition vertices for all geometry (another approach, if you have Pro, would be a post-processing Image Effect such as the built-in one), then this shouldn’t affect you. The job of the vertex shader is to transform incoming object-space vertices into clip-space positions ranging from -1 to 1 in all three dimensions (after the perspective divide). This makes it very easy for the GPU to discard all data clearly outside clip space, as it would never show up on screen and shouldn’t be fragment-shaded or otherwise processed further. Etc etc :wink: So clipping happens after your vertex shader has completed all its transformations, whether that is a standard model-view-projection transformation or something more exotic.

Positions outside of the -1 to 1 range in any dimension of (post-divide) clip space will of course be discarded. You will never notice that for the first two dimensions, since it just means the point is outside the viewport. In the third dimension you notice it as the near and far clip planes. So if this is visible, it’s the z coordinate that causes the issue.