# Shader: get size of a pixel for orthographic camera

I’ve been working on a shader to create object outlines, and I can’t seem to figure out what the width/height of a single pixel is in projection space, i.e. after you’ve multiplied everything by MVP. What I would like to do is move each vertex out along a normal by some number of pixels.

So let’s say I have a simple 1x1 square quad. At each corner I have a normal pointing out from the center of the quad. All the points and normals are in 2D space relative to the camera; the quad is facing the camera straight on. I then transform each normal into projection space:

``````// rotate the normal into clip space using the rotation part of MVP
float3 norm = mul((float3x3)UNITY_MATRIX_MVP, v.normal);
float2 offset = norm.xy;
``````

Then, I want to take that offset direction and multiply it by, say, 2 pixels. So something like this, though this code isn’t working:

``````output.pos.xy += (offset * _NumberOfPixels) / _ScreenParams.y;
``````

(Note: output.pos is the vertex position after it has been converted to projection space.)

A bit more information: this camera’s orthographic size changes over time. I’m rendering the output of the ortho camera to a RenderTexture, which has a known size in pixels.

I’m happy to write code to keep track of the camera size with a script, and pass it to a material variable, if someone can explain to me how to use them to determine pixel size in projection space.

I think there should be a built-in value for pixel size, but otherwise you can always pass it in to your shader yourself:

``````// size of one pixel in normalized (0..1) screen coordinates
pixel.width = 1.0f / Screen.width;
pixel.height = 1.0f / Screen.height;
``````

simple enough.
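One caveat: clip-space x and y run from -1 to 1, so the screen is 2 units across and a single pixel is 2.0 / width (not 1.0 / width) in projection space. A minimal sketch of applying the offset in a vertex shader, assuming a helper function and _PixelOffset property of my own naming (neither is in the original code):

``````// Offsets a clip-space position by _PixelOffset pixels along dir.
// _ScreenParams.xy is the current render target size in pixels.
float4 OffsetByPixels(float4 clipPos, float2 dir, float _PixelOffset)
{
    // clip space spans 2 units across the screen, so one pixel = 2 / size;
    // dividing x by width and y by height keeps the offset square
    clipPos.xy += dir * _PixelOffset * 2.0 / _ScreenParams.xy;
    return clipPos;
}
``````

Dividing the x and y components by the width and height separately (rather than dividing both by height, as in the question's snippet) keeps the offset the same number of pixels in both axes regardless of aspect ratio.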

EDIT:
Just found this: http://docs.unity3d.com/Documentation/Components/SL-BuiltinValues.html

``````float4 _ScreenParams :
x is current render target width in pixels
y is current render target height in pixels
z is 1.0 + 1.0/width
w is 1.0 + 1.0/height
``````

Use unity_OrthoParams.xy:

``````// x = orthographic camera's width
// y = orthographic camera's height
// z = unused
// w = 1.0 when the camera is orthographic, 0.0 when perspective
float4 unity_OrthoParams;
``````
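Putting the pieces together, here is a hedged sketch of the kind of outline vertex shader the question describes. _NumberOfPixels and the normal-transform line come from the question; the struct names and the normalize step are my assumptions:

``````// Sketch: push each vertex outward along its normal by _NumberOfPixels
// pixels, assuming the quad faces an orthographic camera head-on.
v2f vert(appdata v)
{
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    // outward direction in clip space (rotation part of MVP only)
    float2 dir = normalize(mul((float3x3)UNITY_MATRIX_MVP, v.normal).xy);
    // one pixel is 2/size clip-space units; scale x and y separately
    // so the outline stays _NumberOfPixels thick at any aspect ratio
    o.pos.xy += dir * _NumberOfPixels * 2.0 / _ScreenParams.xy;
    return o;
}
``````

Because the camera is orthographic, o.pos.w is 1 and the perspective divide leaves the offset unchanged; for a perspective camera you would multiply the offset by o.pos.w so it survives the divide.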