With these settings I’m still seeing extreme banding. Here’s a screenshot of what I’m looking at:
Notice the banding on the cone towards the bottom of the image.
Additionally, I’ve tried loading the raw image from disk and setting the texture on the material via C# with the exact same results. Looking through the forums I’ve seen people indicate that they have gotten perfectly smooth normal maps on reflective surfaces via C# but I have not had any luck with this. As a matter of fact I have not seen any solutions on the forums which I’ve been able to get to work in Unity 2017.3.
The normal map renders perfectly smoothly in VRay and Modo, so the map itself is fine.
This normal map was baked from geometry in Modo.
Is there any way to get a 16-bit normal map to render in Unity without artifacts? This strikes me as a bit-depth issue, I'm just not sure how to solve it.
Yes, this is a bit-depth issue. Unity only supports 8-bits-per-channel image formats for normal maps imported via the built-in texture import tools. Loading the raw image yourself should solve the issue, assuming the resulting Texture2D uses TextureFormat.RGBAHalf, which is the uncompressed 16-bits-per-channel texture format, and the texture you're loading from disk is in fact at least 16 bits per channel as well.
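To see why 8 bits per channel bands on reflective surfaces, here's a quick back-of-the-envelope calculation (Python as a stand-in, not Unity code) of the smallest change in a normal component each bit depth can represent:

```python
# Rough illustration: the smallest representable change in a normal
# component at a given bit depth, after the usual [0,1] -> [-1,1] remap.
def quantization_step(bits):
    levels = 2 ** bits - 1          # e.g. 255 distinct steps for 8-bit
    return 2.0 / levels             # step size across the [-1, 1] range

step_8 = quantization_step(8)       # ~0.0078, coarse enough to show as bands
step_16 = quantization_step(16)     # ~0.00003, roughly 257x finer

print(step_8, step_16)
```

Glossy reflections amplify those ~0.008-sized jumps between adjacent normal values, which is exactly the stair-stepping visible on the cone.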
Thanks for the response bgolus. So, I’ve tried to use RGBAHalf via C# and I’m getting exactly the same results.
Here’s the code I am using. Perhaps someone could point out what I need to change.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System.IO;
public class normalImportClean2 : MonoBehaviour {

    // Use this for initialization
    void Start () {
        byte[] fileData = File.ReadAllBytes("C:/Users/me/Desktop/Plane_Normal_v2.png");
        Debug.Log(fileData.Length);

        Texture2D tex = new Texture2D(4096, 4096, TextureFormat.RGBAHalf, false, true);
        tex.LoadImage(fileData);

        Material[] mats = GetComponent<Renderer>().sharedMaterials;
        mats[0].SetTexture("_BumpMap", tex);
    }

    // Update is called once per frame
    void Update () {
    }
}
You can’t use LoadImage() here. Even if your PNG file is 16 bits per channel, LoadImage() explicitly converts PNG data to an RGBA32 image (8 bits per channel) before it gets applied to the Texture2D, regardless of the format of that target Texture2D.
You must load an uncompressed image format, like TIFF or some other raw type, and use Texture2D.LoadRawTextureData() to apply the bytes to the texture. You also need to strip the header from the loaded bytes so the data is only the raw pixel colors.
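As a rough sketch of that conversion (Python just to show the byte layout, not Unity code): strip the header, then repack each 16-bit integer channel as a half float, which is what RGBAHalf expects. The header size, channel order, and endianness here are assumptions you'd have to check against your actual file format.

```python
import struct

# Hypothetical sketch: convert headerless 16-bit-per-channel pixel data into
# half-float bytes laid out the way TextureFormat.RGBAHalf stores them.
# header_size and "<" (little-endian) are assumptions about the source file.
def uint16_pixels_to_rgbahalf(raw_bytes, header_size=0):
    pixel_bytes = raw_bytes[header_size:]            # strip the (assumed) header
    out = bytearray()
    for (value,) in struct.iter_unpack("<H", pixel_bytes):
        half = value / 65535.0                       # normalize 0..65535 -> 0..1
        out += struct.pack("<e", half)               # repack as a 16-bit half float
    return bytes(out)

# One white RGBA pixel (all four channels at maximum):
converted = uint16_pixels_to_rgbahalf(struct.pack("<4H", 65535, 65535, 65535, 65535))
```

The resulting bytes would be what you hand to Texture2D.LoadRawTextureData() on a texture created with TextureFormat.RGBAHalf.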
Beyond this I can’t really help you, as I’ve never actually gotten this to work myself.
Unity technically added support for importing 16-bit-depth textures, but last I checked it only works with EXR files and always assumes they’re HDR color images, so it can’t easily be used to import high-precision data like normal maps.
Let’s assume I’m able to get a raw 16-bit PNG or TIF to load and I have its raw bytes. In order to use this image as a normal map on Windows, will I need to swap the green channel to red and red to alpha, and remap the values to -1 to 1, for Unity to use it as a normal map?
Ok, so one thing I have discovered that works: if I import my normal maps as 16-bit EXRs, the RGBAHalf option is actually available as a format. This format option is not available for 16-bit PNGs. I leave the EXR as a default texture (I don’t set it to Normal map), then in Amplify Shader I do an Unpack Normal Map on the texture read node. No more artifacts. Woo hoo!
Terrain heightmaps are single channel, and special-cased in that you can only import / export them through a terrain component. Unity has had other support for 16-bit-depth image formats for a while; it’s mainly their texture import tools that haven’t kept up with the internal engine support, though BC6H is a nice addition.
Unity’s shaders by default expect normal maps in that swizzled format, but I believe they updated the UnpackNormal() function to handle RG normals as well as AG normals with a bit of a math trick: multiply the red and alpha channels together, and assume AG normals store a white red channel while RG normals store a white alpha. It will always reconstruct the Z component. I’m not sure if Amplify is calling that same Unity function or their own.
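If it helps, that multiply trick can be sketched like this (Python stand-in for the shader math, based on the description above rather than Unity’s actual source):

```python
import math

# Sketch of the combined AG/RG unpack trick: multiplying red and alpha means
# AG-packed normals (red stored as 1) read X from alpha, while RG-packed
# normals (alpha stored as 1) read X from red. Z is always reconstructed.
def unpack_normal(r, g, b, a):
    x = (r * a) * 2.0 - 1.0          # red * alpha covers both packings
    y = g * 2.0 - 1.0
    z = math.sqrt(max(0.0, 1.0 - (x * x + y * y)))   # rebuild Z from X and Y
    return (x, y, z)

# A flat normal (0, 0, 1) unpacks identically from either packing:
flat_ag = unpack_normal(1.0, 0.5, 0.0, 0.5)   # AG-packed: X lives in alpha
flat_rg = unpack_normal(0.5, 0.5, 0.0, 1.0)   # RG-packed: X lives in red
```

Since Z is reconstructed from X and Y, the blue channel of the map is ignored entirely, which is why the packing only has to worry about two channels.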