Anti-Aliasing causes post-processing effects to flip.

Our current project is using a Post Processing Outline shader. We have imported the Standard Assets and the Pro Standard Assets to fix the original upside-down errors introduced by Unity 2.6.

However, if Anti-Aliasing is now turned on, our outline flips upside down. How do I stop the outline from flipping upside down when Anti-Aliasing is turned on?

EDIT: Here is the shader in question. To generate the depth/normals texture I'm using a very slightly modified version of the "Camera - DepthNormalTexture" shader provided in the Built-in Shaders package from the Unity site; the only change is turning off culling for the Transparent Cutout RenderType (a sketch of that change follows the listing below).

Shader "Hidden/Edge Detect Normals XNA" {
Properties {
    _MainTex ("", RECT) = "" {}
    _DepthNormalsTexture ("DepthNormalsTexture", RECT) = "" {}
    _NormalThreshold ("NormalThreshold", float) = 1.0
    _DepthThreshold ("DepthThreshold", float) = 1.0
    _NormalSensitivity ("NormalSensitivity", float) = 1.0
    _DepthSensitivity ("DepthSensitivity", float) = 1.0
    _EdgeIntensity ("EdgeIntensity", float) = 1.0
    _EdgeWidth ("EdgeWidth", float) = 1.0
    _ScreenHeight ("", float) = 1.0
    _ScreenWidth ("", float) = 1.0
}

SubShader {
    Pass {
    	ZTest Always Cull Off ZWrite Off
    	Fog { Mode off }

CGPROGRAM
#pragma target 3.0
#pragma vertex vert
#pragma fragment frag
#pragma fragmentoption ARB_precision_hint_fastest 
#include "UnityCG.cginc"

uniform samplerRECT _MainTex;
uniform samplerRECT _DepthNormalsTexture;

uniform float4 _MainTex_TexelSize;
uniform float4 _DepthNormalsTexture_TexelSize;
uniform float _NormalThreshold;
uniform float _DepthThreshold;
uniform float _NormalSensitivity;
uniform float _DepthSensitivity;
uniform float _EdgeWidth;
uniform float _EdgeIntensity;
uniform float _ScreenHeight;
uniform float _ScreenWidth;

struct v2f {
    float4 pos : POSITION;
    float2 uv : TEXCOORD0;
};

v2f vert( appdata_img v )
{
    v2f o;
    o.pos = mul (glstate.matrix.mvp, v.vertex);
    o.uv = MultiplyUV( glstate.matrix.texture[0], v.texcoord );
    return o;
}

float4 frag (v2f i) : COLOR
{   
    float4 original = texRECT(_MainTex, i.uv);
    // note: edgeOffset is computed but never used below; the sampling offsets are taken from the texel size instead
    float2 edgeOffset = _EdgeWidth / float2(_ScreenWidth, _ScreenHeight);
    float2 offset = float2(1,1) * _DepthNormalsTexture_TexelSize.xy;
    float2 invOffset = float2(-1,1) * _DepthNormalsTexture_TexelSize.xy;
    // 4 samples from normals+depth buffer
    float4 normalD1 = texRECT(_DepthNormalsTexture, i.uv - offset);
    float4 normalD2 = texRECT(_DepthNormalsTexture, i.uv + offset);
    float4 normalD3 = texRECT(_DepthNormalsTexture, i.uv + invOffset);
    float4 normalD4 = texRECT(_DepthNormalsTexture, i.uv - invOffset);

    float3 normal1;
    float depth1;
    float3 normal2;
    float depth2;
    float3 normal3;
    float depth3;
    float3 normal4;
    float depth4;

    // Decode normal/depth data
    DecodeDepthNormal(normalD1, depth1, normal1);
    DecodeDepthNormal(normalD2, depth2, normal2);
    DecodeDepthNormal(normalD3, depth3, normal3);
    DecodeDepthNormal(normalD4, depth4, normal4);	

    // Work out how much the normal and depth values are changing
    float4 diagonalDelta = abs(float4(normal1, depth1) - float4(normal2, depth2)) + abs(float4(normal3, depth3) - float4(normal4, depth4));

    float normalDelta = dot(diagonalDelta.xyz, float3(1,1,1)); // sum of the per-component normal differences
    float depthDelta = diagonalDelta.w;

    // Filter out very small changes, in order to produce nice clean results
    normalDelta = saturate((normalDelta - _NormalThreshold) * _NormalSensitivity);
    depthDelta = saturate((depthDelta - _DepthThreshold) * _DepthSensitivity);

    // Does this pixel lie on an edge?
    float edgeAmount = saturate(normalDelta + depthDelta) * _EdgeIntensity;

    original *= (1 - edgeAmount);
    return original;
}
ENDCG
    }
}

Fallback off

}
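
For reference, the modification to the depth/normals shader mentioned above amounts to a one-line change. The structure below is only an approximation of Unity's "Camera - DepthNormalTexture" shader; everything except the added Cull Off is left exactly as shipped:

// Only the SubShader tagged "RenderType" = "TransparentCutout" is touched.
SubShader {
    Tags { "RenderType" = "TransparentCutout" }
    Pass {
        Cull Off   // added: back faces of cutout geometry now also write into the depth/normals texture
        // ... original CGPROGRAM from "Camera - DepthNormalTexture" left unchanged ...
    }
}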

Aras is right, but for some strange reason the Shader Replacement project didn't get updated on our website; this will be fixed ASAP.

In the meantime, this is what the Edge Detection example needed in its vertex shader:

// On D3D when AA is used, the main texture & scene depth texture
// will come out in different vertical orientations.
// So flip sampling of depth texture when that is the case (main texture
// texel size will have negative Y).
#if SHADER_API_D3D9
if (_MainTex_TexelSize.y < 0)
	uv.y = 1-uv.y;
#endif

And it is also what your shader needs.

The reason for this inconvenience is that D3D and OpenGL interpret the Y texture coordinate differently. We handle that behind the scenes in Unity so that the user doesn't have to worry about it, but with anti-aliasing on D3D it would force us to blit the entire image upside down, which is expensive. It's a lot cheaper to just flip the Y coordinate, as the example shows.
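
For reference, here is roughly how that snippet could be folded into the shader posted above. This is only a sketch under the assumption that a second interpolator (called uv_depth here, my own naming) carries the UV for the depth/normals sample while the main texture UV is left untouched:

struct v2f {
    float4 pos : POSITION;
    float2 uv : TEXCOORD0;        // UV for _MainTex, unchanged
    float2 uv_depth : TEXCOORD1;  // separate UV for _DepthNormalsTexture (assumed addition)
};

v2f vert( appdata_img v )
{
    v2f o;
    o.pos = mul (glstate.matrix.mvp, v.vertex);
    o.uv = MultiplyUV( glstate.matrix.texture[0], v.texcoord );
    o.uv_depth = o.uv;
    // On D3D when AA is used, the main texture & scene depth texture come out
    // in different vertical orientations, so flip only the depth sample.
    #if SHADER_API_D3D9
    if (_MainTex_TexelSize.y < 0)
        o.uv_depth.y = 1 - o.uv_depth.y;
    #endif
    return o;
}

The fragment shader would then sample _DepthNormalsTexture with i.uv_depth instead of i.uv, while _MainTex keeps using i.uv.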

What is the "Post Processing Outline" shader? Making image effects + FSAA work together can be quite involved. Most of the time it all works, except when it does not.

Take a look at the Edge Detection sample in the Shader Replacement project for an example where the solution is more involved.

I'm finding a problem when I use the suggested fix:

// On D3D when AA is used, the main texture & scene depth texture
// will come out in different vertical orientations.
// So flip sampling of depth texture when that is the case (main texture
// texel size will have negative Y).
#if SHADER_API_D3D9
if (_MainTex_TexelSize.y < 0)
        uv.y = 1-uv.y;
#endif

The result isn't the same as when AA is off; it ends up slightly offset in Y. Any thoughts?