Built-in Render Pipeline Deferred G-buffer Normals Encoding

I’ve written a deferred shader to draw custom geometry into the g-buffer in the Built-in Render Pipeline. However, something seems to be off about the normals (RT2) I am writing. I am writing world space normals, since the Unity docs seem to indicate the normals in this buffer are in world space: Unity - Manual: Deferred Shading rendering path

However, when looking at the normals g-buffer (RT2) in the Frame Debugger, it seems like Unity’s normals aren’t exactly in world space. Here is an image of the normal buffer; the scene is a default Unity plane with a Unity cube on it, and my custom cube on the very top. Everything is axis aligned, so, as you can see on my custom geometry, I would expect the world space normals to be something like (1, 0, 0) or (0, 0, 1); but judging by the bottom cube, that is not how Unity stores the normals for its regular geometry in this buffer.

I suspect I need to write some encoded version of the world space normal to this buffer instead of the raw value, but I can’t find any documentation on what that encoding function is. Could anybody tell me how to convert my world space normals into the format Unity expects? Or is there possibly something else causing this mismatch?

Unity’s normals g-buffer (RT2) is an R10G10B10A2_UNORM target, meaning it stores a value between 0.0 and 1.0 per component, so you need to remap the normal vector from its -1.0 to +1.0 range into 0.0 to 1.0 (i.e. n * 0.5 + 0.5), just like a tangent space normal map.
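
Here’s a minimal sketch of what that looks like in a custom deferred fragment program writing to all four MRTs. The struct and variable names (FragOutput, v2f, _Color) are placeholders rather than anything from your actual shader; the only part that matters here is the remap on RT2:

```hlsl
struct v2f
{
    float4 pos         : SV_POSITION;
    float3 worldNormal : TEXCOORD0;
};

fixed4 _Color;

// One output per g-buffer render target in the built-in deferred layout.
struct FragOutput
{
    half4 gBuffer0 : SV_Target0; // RT0: diffuse RGB, occlusion A
    half4 gBuffer1 : SV_Target1; // RT1: specular RGB, smoothness A
    half4 gBuffer2 : SV_Target2; // RT2: world space normal RGB, unused A
    half4 gBuffer3 : SV_Target3; // RT3: emission / lighting accumulation
};

FragOutput frag (v2f i)
{
    FragOutput o;

    o.gBuffer0 = half4(_Color.rgb, 1.0);
    o.gBuffer1 = half4(0.0, 0.0, 0.0, 0.0);

    // RT2 is UNORM (0..1 per channel), so scale and bias the
    // -1..+1 world space normal into 0..1 before writing it.
    float3 n = normalize(i.worldNormal);
    o.gBuffer2 = half4(n * 0.5 + 0.5, 1.0);

    o.gBuffer3 = half4(0.0, 0.0, 0.0, 1.0);
    return o;
}
```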

That was my issue, thanks! Does ARGB2101010 typically imply that it is UNORM? Or is the fact that it is UNORM separate from that specification?

It’s separate; the name only describes the bit layout, and Unity’s RenderTextureFormat.ARGB2101010 format happens to be a UNORM. And you can’t change Unity’s g-buffer formats, apart from the emission / accumulation buffer, whose format is controlled by the HDR setting on the camera / project.
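
As an aside, that HDR setting also changes how you’d write the emission target: when the camera is not HDR, the accumulation buffer is a UNORM target too, and Unity logarithmically encodes emission into it. A rough sketch of that pattern, paraphrased from memory of the standard shader’s deferred pass rather than copied verbatim:

```hlsl
// Hypothetical helper mirroring how the built-in pipeline encodes
// the emission / accumulation target (RT3). UNITY_HDR_ON is the
// keyword Unity sets when the camera renders in HDR.
half4 EncodeEmissionForGBuffer(half3 emissiveColor)
{
#ifndef UNITY_HDR_ON
    // LDR path: the accumulation buffer is low-precision UNORM,
    // so emission is logarithmically encoded here and decoded
    // later by Unity's internal deferred shading pass.
    emissiveColor = exp2(-emissiveColor);
#endif
    return half4(emissiveColor, 1.0);
}
```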

There are signed versions of A2R10G10B10 that GPUs support, but only SINT and the super weird XRSRGB, which I think is intended for display output on high color precision displays; not SNORM or SFloat. It’s a weird format, but useful because it offers a lot more precision than traditional R8G8B8A8 while still being only 32 bits per pixel.