Hello everyone, I’ve recently started using the Amplify Shader Editor and learning about the post processing stack. For my project my artist asked me if I could get the depth and normal textures from the camera view. I’ve done quite a bit of research and found out about template shaders and exposing information from the shader on the graph. The last wall I seem to have slammed into, though, is the camera’s normal texture.
What I have found is that depth data is saved in _CameraDepthTexture (the r channel) and that normal data can be found encoded in the r and g channels of the _CameraDepthNormalsTexture global variable. However, I can’t seem to find anything about a standalone normal texture, or about parsing the normal texture out of _CameraDepthNormalsTexture.
To clarify, I need the normal texture in order to expose it in the Amplify GUI. So I’m not trying to get values on a per pixel basis using the tex2D and DecodeDepthNormal functions. Anyone have any insight into this dilemma?
I’m not entirely sure what you’re looking for. From your description I’m not even sure you know what you’re looking for.
You mentioned your artist asked for the camera view depth & normals. Were they asking for the depth and normals from the scene’s opaque objects (which is what the _CameraDepthTexture and _CameraDepthNormalsTexture store), or were they looking for the depth and view space normal of the current object being rendered?
If it’s the latter, you need to add a normal texture as a property and sample it like any other texture, but with Unpack Normal Map checked on for that Sample node. Then you can transform it into world space using the World Normal node, and into view space with a Transform Direction node set to World To View. That’ll get you the same normal vector for that object that would otherwise be encoded in the _CameraDepthNormalsTexture if it were opaque. For depth use the Surface Depth node.
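For reference, this is roughly what that node chain boils down to in Cg/HLSL. It’s just a sketch: the texture, UV, and interpolated tangent basis names here are placeholders for whatever your template actually provides.

```
// Sketch of the node setup described above (placeholder names).
sampler2D _NormalMap;

float3 ObjectViewSpaceNormal(float2 uv, float3 worldTangent, float3 worldBitangent, float3 worldNormal)
{
    // Sample node with Unpack Normal Map checked on
    float3 tangentNormal = UnpackNormal(tex2D(_NormalMap, uv));

    // tangent space -> world space (World Normal node)
    float3 worldSpaceNormal = normalize(
        tangentNormal.x * worldTangent +
        tangentNormal.y * worldBitangent +
        tangentNormal.z * worldNormal);

    // world space -> view space (Transform Direction node, World To View)
    return mul((float3x3)UNITY_MATRIX_V, worldSpaceNormal);
}
```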
Based on your description, I need the normals from the scene’s opaque objects. But I need a normal map by itself; _CameraDepthNormalsTexture stores both depth and normals, and I want to get just the normal texture out of it.
Yeah, I don’t understand what you’re asking for. A “normal map” and “normal texture” are different ways of saying the same thing.
A “normal map” is any texture that holds normal data. Most commonly this is a “tangent space normal map texture”, but the world space normals in a deferred gbuffer and the stereographically encoded view space normals in the _CameraDepthNormalsTexture both also count as “normal maps”. The original objects’ tangent space normal maps are no longer accessible from either of those textures, as they’re new textures storing normal data per-pixel with no connection back to the original mesh.
If you’re looking to get the original tangent space normal map of some object that’s not the object you’re currently rendering … well you need to do that manually by setting a normal map on the material of the object, either finding the texture asset by hand or via script.
Ok, what I want now is the normal data encoded in _CameraDepthNormalsTexture. How do I extract that normal texture out of _CameraDepthNormalsTexture? I can’t find any info on that topic.
Why? It already is a normal texture, use it as is.
If you’re going to extract it and store it in a different encoding (like color = normal * 0.5 + 0.5, like a tangent space normal map) you’ll only be losing information.
Then perhaps this is a deficit of information on my part, but all I’ve read is that the normal data is saved in the red and green channels. I don’t understand how that information should then be interpreted the way I would a normal map. So can you explain how this data is interpreted?
Your basic normal vector is a unit length (length(normal) == 1) vector that represents a direction in some space. The common spaces are world, object, view, and tangent. The most common normal maps are tangent space normal maps, as mentioned above. These are normal maps which store normals in the space relative to the interpolated surface normal and UV orientation.
I go deeper into explaining tangent space normal maps here:
The most commonly used compressed texture formats are limited to 0.0 to 1.0 value representations, usually stored with 8 bits per channel, or 256 possible values (the familiar 0 to 255 values you often see RGB colors shown as). To fit a tangent space normal, which has 3 components that range from -1.0 to 1.0, into a texture that can only store values between 0.0 and 1.0, the easiest solution is to remap the range with the aforementioned normal * 0.5 + 0.5 math. Tangent space normal maps have the added benefit that every normal is guaranteed to face away from the surface and never back towards it. So it’s common for tangent space normal maps to only store the x and y (red and green) and reconstruct the z. Since a normal is always a unit length vector, this is easy enough to do with the Pythagorean theorem. Unity’s default normal maps for PC and consoles do this for reasons I’m not going to go deep into right now.
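As a rough sketch, that xy-only reconstruction looks like this (essentially what UnityCG.cginc’s UnpackNormal ends up doing for Unity’s default PC/console normal map format):

```
// Sketch: rebuilding z for a tangent space normal stored as xy only.
float3 ReconstructTangentNormal(float2 packedXY)
{
    float3 normal;
    // undo the * 0.5 + 0.5 range remap
    normal.xy = packedXY * 2.0 - 1.0;
    // Pythagorean theorem: x*x + y*y + z*z == 1, and z is known to be positive
    normal.z = sqrt(saturate(1.0 - dot(normal.xy, normal.xy)));
    return normal;
}
```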
The normals in the _CameraDepthNormalsTexture are also only xy values, but they are not just the xy values of the normal; they are a different encoding, specifically a stereographic encoding. The reason for this is that view space normals aren’t guaranteed to be facing the view plane. This means you can’t rely on reconstructing the z from the xy, since the z might be negative, but a reconstructed z will always be positive. Stereographic encoding stores something more akin to “how far around” a point on a sphere is, rather than a direction, with some arbitrary scaling factor setting the maximum “how far around” so it’s not the full sphere. The resulting value is between -infinity and +infinity, which is being limited to -1.0 to 1.0 and rescaled with the usual * 0.5 + 0.5 for storage in an 8 bit per channel texture.
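If you want to see what that looks like, this is roughly what UnityCG.cginc’s EncodeViewNormalStereo / DecodeViewNormalStereo do (paraphrased here, so treat it as a sketch rather than the exact source):

```
// kScale is the arbitrary scaling factor mentioned above.
float2 EncodeViewNormalStereoSketch(float3 n)
{
    const float kScale = 1.7777;
    float2 enc = n.xy / (n.z + 1.0); // stereographic projection
    enc /= kScale;                   // scale so the useful range fits in -1 to 1
    return enc * 0.5 + 0.5;          // remap to 0 to 1 for storage
}

float3 DecodeViewNormalStereoSketch(float2 enc)
{
    const float kScale = 1.7777;
    // undo the remap and scale, then invert the projection
    float3 nn = float3(enc * 2.0 * kScale - kScale, 1.0);
    float g = 2.0 / dot(nn, nn);
    return float3(g * nn.xy, g - 1.0);
}
```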
Now you could decode the normal stored in the _CameraDepthNormalsTexture, which is an RGBA32 texture (8 bits per channel), and re-encode it into a view or world space RGB24 texture using the more traditional * 0.5 + 0.5 encoding alone, but in doing so you’ll only lose information. Both formats are 8 bits per channel, and while you’d be using 24 bits (3 channels) instead of the original 16 bits (2 channels), you’d still be losing information to aliasing between the two encodings’ precision: the stereographic encoding has more precision facing towards the camera and less facing away, while traditional normal map encoding has roughly equal precision in all directions.
The TLDR is use that texture as is, and use DecodeDepthNormal() to get the view space normal.
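In a shader where _CameraDepthNormalsTexture is available (a post process, or anything rendering after the depth normals pass), that’s just something like this minimal sketch. It assumes the camera has DepthTextureMode.DepthNormals enabled and that screenUV is a normalized screen position.

```
#include "UnityCG.cginc"

sampler2D _CameraDepthNormalsTexture;

void SampleSceneDepthNormal(float2 screenUV, out float linear01Depth, out float3 viewNormal)
{
    float4 enc = tex2D(_CameraDepthNormalsTexture, screenUV);
    // DecodeDepthNormal returns the 0-1 linear depth and the decoded
    // view space normal in one call.
    DecodeDepthNormal(enc, linear01Depth, viewNormal);
}
```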