Here is the “Texture Coordinates” shader example taken from:
All I have done is set the blue channel of my frag output to "i.position.y", and suddenly the shader won't work in D3D9 and gives the previously stated error.
If I change my build settings to D3D11 but still leave the #pragma target 3.0, it suddenly works, despite targeting the same shader model. This seems very odd.
If this is intended, could someone share with me the proper method to handle this?
I was doing:
float2 sSpace = (i.position.xy / _ScreenParams.xy);
for my screen-space effects, and it works perfectly in D3D11 mode. But if it's not going to work in D3D9, I need to figure out a different solution. Thanks!
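For context, here is a minimal sketch of the kind of setup being described, reconstructed from the snippets above (the struct and function names are my own, not the exact forum code; a CGPROGRAM block with UnityCG.cginc included is assumed):

```
struct v2f
{
    float4 position : SV_POSITION;
    float2 texcoord0 : TEXCOORD0;
};

v2f vert (appdata_base v)
{
    v2f o;
    o.position = mul(UNITY_MATRIX_MVP, v.vertex);
    o.texcoord0 = v.texcoord.xy;
    return o;
}

fixed4 frag (v2f i) : SV_Target
{
    // Reading the SV_POSITION interpolator directly in the fragment:
    // works in D3D11 mode, fails to compile in D3D9 mode.
    float2 sSpace = i.position.xy / _ScreenParams.xy;
    return fixed4(sSpace, 0, 1);
}
```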
The code you have there compiles as HLSL, so it is on Unity's side, probably some issue with translation to GLSL? Try the D3D9 "VPOS" semantic instead of "SV_POSITION". If that does not work, you'll just have to copy the position to a separate TEXCOORD in the vertex shader.
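A hedged sketch of the VPOS suggestion: on D3D9 with shader model 3.0, the pixel coordinate arrives through the VPOS input semantic rather than SV_POSITION. The UNITY_VPOS_TYPE macro (from Unity's HLSLSupport.cginc) is assumed here to pick the right type per platform:

```
#pragma target 3.0   // VPOS requires shader model 3.0 on D3D9

fixed4 frag (UNITY_VPOS_TYPE screenPos : VPOS) : SV_Target
{
    // screenPos.xy is already in pixels; normalize by the screen size.
    float2 sSpace = screenPos.xy / _ScreenParams.xy;
    return fixed4(sSpace, 0, 1);
}
```

Note that on D3D9 the fragment shader cannot declare both SV_POSITION and VPOS in its inputs, which is why this sketch takes VPOS as the only argument.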
As far as I know, you can't use the clip-space position directly in the pixel/fragment shader, so copying it to another float4 is correct. You have to imagine it's actually like this:
The position is defined on some platforms, like D3D11 (and D3D9, or at least the HLSL docs say so; not sure about other platforms). The difference between passing your own clip-space position and reading from SV_POSITION is that the latter already has the perspective division applied (pos.xy / pos.w) and is scaled to (0, size) in pixels instead of (0, 1) or (-1, 1), which is what you're seeing in the left screenshot. The solution is to normalize the coordinates back by dividing them by _ScreenParams.xy and then offsetting them to whatever range you need.
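A minimal sketch of the portable fix described above: copy the clip-space position into a TEXCOORD in the vertex shader, then do the perspective divide and remap yourself in the fragment shader (the wiring and names here are assumptions, not the thread's exact code):

```
struct v2f
{
    float4 position : SV_POSITION;
    float4 screenPos : TEXCOORD1; // clip-space position, copied for the fragment stage
};

v2f vert (appdata_base v)
{
    v2f o;
    o.position = mul(UNITY_MATRIX_MVP, v.vertex);
    o.screenPos = o.position;     // same value, interpolated as a plain texcoord
    return o;
}

fixed4 frag (v2f i) : SV_Target
{
    // Manual perspective divide; the result is in (-1, 1), so remap to (0, 1).
    float2 ndc = i.screenPos.xy / i.screenPos.w;
    float2 sSpace = ndc * 0.5 + 0.5;
    return fixed4(sSpace, 0, 1);
}
```

Since the copied value bypasses the hardware's viewport transform, the fragment has to apply the divide by w itself; that is the division SV_POSITION has already had applied for you.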
Thanks for the help. There is still something strange at the root of my problem: DX9 and DX11 mode seem to produce different results with the same code. Here is jvo3dc's code, unaltered, in both modes. Why is the output different in each?
edit:
After some experimentation I discovered something odd. In DX9 mode, if I use "texcoord0" in my return instead of the "texcoord1" that we passed the position to, I get the exact same gradient result as in DX11 mode. This is really strange…
And if I do that in DX11 mode, it gives me the same pure-dark-blue result as doing "i.texcoord1.xy / _ScreenParams.xy"… as if DX11 mode were already applying that division to texcoord0 automatically, which is weird.
Though I've yet to find a way to replicate in DX9 the pure yellow result that directly passing the position gives in DX11 mode.