I would like to know how can I convert default Tangent space Normals into View Space Normals.
Thank you very much !
The default normals you read directly from a vertex are not in tangent space; they are in model (i.e. object) space. To go from object space to view space you use the UNITY_MATRIX_MV matrix, so in a vertex shader it would be:
o.normalView = normalize(mul((float3x3)UNITY_MATRIX_MV, v.normal));
Or do you mean the normals from a normal map? Those are in tangent space, and are normally converted to world space by a matrix built from the normal/binormal/tangent passed from the vertex shader. You can take the normal you get in your fragment shader like this:
float3 localCoords = UnpackNormal(tex2D(_BumpMap, i.tex));
//technically should normalize these first
float3x3 local2WorldTranspose = float3x3( i.tangentWorld,
i.binormalWorld,
i.normalWorld);
float3 worldNormal = normalize(mul(localCoords, local2WorldTranspose));
and then transform that worldNormal into viewspace with
float3 viewNormal = normalize(mul((float3x3)UNITY_MATRIX_V, worldNormal));
or you can pass your original normal/binormal/tangent in view space to begin with and save a step in your fragment shader.
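A minimal sketch of that second option (the `tangentView`/`binormalView`/`normalView` names are my own, not from a specific Unity shader; this assumes an `appdata_tan` input and uniform scaling for the tangent and binormal):

```hlsl
// vertex: transform the whole tangent frame into view space up front
v2f vert (appdata_tan v) {
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    o.tex = v.texcoord;
    // reconstruct the binormal (tangent.w holds the handedness)
    float3 binormal = cross(normalize(v.normal), normalize(v.tangent.xyz)) * v.tangent.w;
    o.tangentView  = normalize(mul((float3x3)UNITY_MATRIX_MV, v.tangent.xyz));
    o.binormalView = normalize(mul((float3x3)UNITY_MATRIX_MV, binormal));
    // inverse-transpose keeps the normal correct under non-uniform scale
    o.normalView   = normalize(mul((float3x3)UNITY_MATRIX_IT_MV, v.normal));
    return o;
}

// fragment: same matrix trick as before, but the result is already in view space
float3 localCoords = UnpackNormal(tex2D(_BumpMap, i.tex));
float3x3 tangent2View = float3x3(i.tangentView, i.binormalView, i.normalView);
float3 viewNormal = normalize(mul(localCoords, tangent2View));
```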
Thank you for your answer @GGeff !
I was talking about normal maps, but I don’t understand your paragraph about them.
From what I understand, in the basic behaviour the normal map is converted to world space using the float3x3 normal/binormal/tangent matrix. But where does that happen (vertex or fragment)?
Concerning view space, from what you are saying there are two ways to do this, but I don’t understand your example. I mean, what is the vertex part and what is the fragment part?
I would be glad if you could detail your answer.
Thank you very much.
You’ll have to refer to this Unity - Manual: Custom shader fundamentals for info on vertex/fragment shaders. Things get a lot more complicated when you write your own vertex/frag shaders. I’m not sure why you’d be worried about having normals in viewspace in the first place if you weren’t already using your own vertex/frag shaders though. You might want to explain more about what you are trying to do.
I am trying to create a refraction effect and from what I’ve seen, it gives better results when using normals in view space. For the moment, I am using GrabPass and a basic deformation of the UVs based on the normal map, but this is not convincing for refraction.
Here’s what I found but I don’t understand everything. How is it different from what you explained ?
// vertex function
// reconstruct the binormal (tangent.w holds the handedness)
float3 binormal = cross( normalize(v.normal), normalize(v.tangent.xyz) ) * v.tangent.w;
// rows are the tangent frame, so this matrix rotates object space -> tangent space
float3x3 rotation = float3x3( v.tangent.xyz, binormal, v.normal );
o.viewDir.xyz = mul(rotation, ObjSpaceViewDir(v.vertex));
// first two rows of the tangent -> view matrix
o.TtoV0 = mul(rotation, UNITY_MATRIX_IT_MV[0].xyz);
o.TtoV1 = mul(rotation, UNITY_MATRIX_IT_MV[1].xyz);
// fragment function
fixed3 normal = UnpackNormal(tex2D(_BumpMap, IN.uv));
// x and y of the normal-map normal in view space
half2 viewNormal;
viewNormal.x = dot(IN.TtoV0, normal);
viewNormal.y = dot(IN.TtoV1, normal);
You could use COMPUTE_VIEW_NORMAL…
Not sure if that works with normal maps though. What I would do:
Let me know if this works! or not…
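For reference, piecing together what the missing snippet presumably looked like from the replies below (the `viewSpaceNormal` name, the `COMPUTE_VIEW_NORMAL` definition, and the `* 0.5f` offset are quoted there; the rest is my guess at the surrounding code):

```hlsl
// from UnityCG.cginc:
// #define COMPUTE_VIEW_NORMAL normalize(mul((float3x3)UNITY_MATRIX_IT_MV, v.normal))

// vertex: store the view-space vertex normal
o.viewSpaceNormal = COMPUTE_VIEW_NORMAL;

// fragment: offset the view-space vertex normal by the unpacked tangent-space normal
fixed3 normal = UnpackNormal(tex2D(_BumpMap, IN.uv));
fixed3 offsetNormal = IN.viewSpaceNormal + normal * 0.5f;
```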
Thanks @FuzzyQuills, it gives nice results. I need to make some tests, but could you explain what’s behind your code?
Why use #define COMPUTE_VIEW_NORMAL normalize(mul((float3x3)UNITY_MATRIX_IT_MV, v.normal)) instead of UNITY_MATRIX_MV ?
Why do IN.viewSpaceNormal + normal * 0.5f; ? By offset, do you mean add?
And how does it differ from the previous code I posted?
Thanks a lot !
Well, COMPUTE_VIEW_NORMAL is a built-in Unity macro (defined in UnityCG.cginc), and computes, er, view-space normals! It uses the inverse-transpose (UNITY_MATRIX_IT_MV) rather than UNITY_MATRIX_MV because that is what keeps normals perpendicular to the surface when the object has non-uniform scaling.
And yes, by offset, I did mean add! I might as well type an example to illustrate:
struct v2f {
fixed4 pos : POSITION; //don't ask why I use fixed point for this... :smile:
fixed2 uv : TEXCOORD0; //UVs. (Unless one needs a base diffuse texture, probably won't need this!)
fixed3 VSNormal : TEXCOORD1; //our view-space normal (V-S Normal)
fixed4 screenPos : TEXCOORD2; //used for GrabPass coord
};
v2f vert (appdata_tan v) {
v2f o;
o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
o.uv = v.texcoord.xy;
o.VSNormal = COMPUTE_VIEW_NORMAL;
o.screenPos = ComputeScreenPos(o.pos);
return o;
}
sampler2D _RefrTex; //refraction texture
sampler2D _BumpMap;
fixed4 frag (v2f i) : COLOR {
fixed2 awesomePos = i.screenPos.xy / i.screenPos.w; //perspective divide so the coords land in the 0-1 range
awesomePos += UnpackNormal(tex2D(_BumpMap, awesomePos)).xy * 0.5; //offset by the normal map (only .xy is needed)
fixed4 col = tex2D(_RefrTex, awesomePos);
return col;
}
Note that this assumes you’re using a RenderTexture to do it: for GrabPass, the screenPos calculation might be different.
Also, there are bound to be small errors in this code, so let me know if something went wrong…
Thanks a lot @FuzzyQuills, but you forgot to use the VSNormal.
It’s a nice technique because you are using the vertex’s view-space normals with the normal map, but it doesn’t always give the right result.
The best way would be to convert not the vertex normals but the normal-map normals to view space.
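A sketch of that combination, reusing the TtoV rows from the snippet earlier in the thread to bring the normal-map normal into view space per pixel, then offsetting the GrabPass UV by it (`_GrabTexture` comes from a plain `GrabPass {}`; the 0.05 strength and the `v2f` field names are placeholders of mine):

```hlsl
sampler2D _GrabTexture; // filled in by GrabPass {}
sampler2D _BumpMap;

fixed4 frag (v2f i) : COLOR {
    fixed3 normal = UnpackNormal(tex2D(_BumpMap, i.uv));
    // first two components of the normal-map normal in view space,
    // using the tangent -> view rows computed in the vertex shader
    half2 viewNormal;
    viewNormal.x = dot(i.TtoV0, normal);
    viewNormal.y = dot(i.TtoV1, normal);
    // offset the grab UV by the view-space normal (0.05 is an arbitrary strength)
    float2 uv = i.screenPos.xy / i.screenPos.w + viewNormal * 0.05;
    return tex2D(_GrabTexture, uv);
}
```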