Hello.
I’m converting heightmaps to normals, but I get these stepping issues. My texture is 16-bit floating point, but Unity doesn’t seem to read it that way by default. How can I set up my texture so it reads as floating point in shaders?
You’ll need to convert them into normal maps before importing them into Unity. Unity’s texture-to-bump-map conversion always converts textures into an 8-bit greyscale image before converting them into a normal map. It’s also really, really bad at it and should never be used, because the result will always be horribly stair-stepped like this.
The problem is that the heightmaps are created semi-procedurally, so they never repeat. The normals therefore have to be derived in the shader from whatever heightmap was just generated.
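For context, the shader-side derivation is just finite differences on neighboring height samples. A minimal sketch of the math in C# (the shader version is the same per pixel; strength is a made-up slope scale):

// Build a tangent-space normal from the four neighboring height samples
// via central differences. "strength" controls how steep the result looks.
static Vector3 NormalFromHeights(float left, float right, float down, float up, float strength)
{
    return new Vector3((left - right) * strength, (down - up) * strength, 1f).normalized;
}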
So then I think I misunderstood what you’re doing. Are you generating these height maps in Unity and looking to derive the normals in a shader?
I’m making a terrain by blending various heightmap textures together with different scales and offsets. The blending operations work on heightmaps, but they mess up normal maps, because the colors (and thus the encoded directions) of a normal map have to stay the same when it’s rotated.
Why not rotate the normals in the normal maps?
Really, without seeing more of your setup I’m not sure what else I can say. If you’re sampling the texture in the shader, it should be getting the real 16 bit value. If you’re not then I would expect the texture you’re passing to it isn’t actually an RHalf (the 16 bit floating point format).
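A quick sanity check is to build the texture in code, where the format is guaranteed to be what you asked for. A sketch, where heightData, heightMaterial and "_HeightMap" are hypothetical names:

// Sanity-check sketch: a runtime-created RHalf texture bypasses the importer entirely.
var tex = new Texture2D(1024, 1024, TextureFormat.RHalf, false, true); // no mips, linear
tex.SetPixelData(heightData, 0); // heightData: ushort[] of half-float bit patterns (hypothetical)
tex.Apply(false);
heightMaterial.SetTexture("_HeightMap", tex); // hypothetical material and property name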
Rotating the normals messes up the angles they’re supposed to represent. Is there an extra setting I have to apply to the texture inside Unity? The heightmaps are 16-bit PNGs; maybe they need to be in another format?
The problem is Unity’s handling of >8-bit texture assets is kind of terrible.
Unity doesn’t properly support 16-bit PNG files. It’ll read them, but it quantizes them down to 8 bits before the editor uses them. So that means as far as the shader is concerned, it’s an 8-bit texture you’re handing it.
You can use a 32-bit float TIFF (Unity doesn’t handle 16- or 24-bit channels from TIFF) or an EXR file. Both of those will properly retain the float values and not be quantized down to 8 bits. They’ll default to the BC6H format, which is okay quality, but there will be some artifacts in the resulting normal. Ideally you’d want to override the format to R16 or RHalf, but the R16 format strangely only works when importing 8-bit textures (which makes it worthless here), and RHalf isn’t displayed as an option (even though it’s in the enum the importer uses); only ARGBHalf is. That means if you can channel-pack your data, it’s not a bad option. Otherwise your heightmaps now use 4× more memory.
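If you want that override applied automatically, an editor script can set it at import time. A rough sketch, assuming a hypothetical "Heightmaps" folder convention and a Unity version where TextureImporterFormat.RGBAHalf (the four-channel half option mentioned above) is available:

using UnityEditor;

// Forces textures under a (hypothetical) Heightmaps folder to an
// uncompressed four-channel half format instead of the default BC6H,
// and keeps the data linear so no gamma curve is applied.
public class HeightmapImportOverride : AssetPostprocessor
{
    void OnPreprocessTexture()
    {
        if (!assetPath.Contains("/Heightmaps/")) return;
        var importer = (TextureImporter)assetImporter;
        importer.sRGBTexture = false;
        var settings = importer.GetDefaultPlatformTextureSettings();
        settings.format = TextureImporterFormat.RGBAHalf;
        importer.SetPlatformTextureSettings(settings);
    }
}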
Unity will also completely mangle the float data if your project is using Gamma space rendering, as it always imports float image data with a forced gamma curve.
Which brings me back to normal maps.
Yes, which is why you want to counter rotate the normal vector by how much you rotated the normal map UVs.
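The counter-rotation itself is just a 2D rotation of the normal’s XY by the negative of the UV angle. A minimal sketch in C# (the shader does the same two lines after sampling the normal map):

// Undo the UV rotation on a tangent-space normal.
static Vector3 CounterRotate(Vector3 n, float uvAngleRadians)
{
    float c = Mathf.Cos(-uvAngleRadians);
    float s = Mathf.Sin(-uvAngleRadians);
    return new Vector3(n.x * c - n.y * s, n.x * s + n.y * c, n.z);
}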
https://discussions.unity.com/t/652740
Hello,
Is there a way to fix this? I am using “AssetDatabase.CreateAsset” and it works fine in gamma space. But the file size is even bigger than the uncompressed texture. For example, a 512x512 RGBAFloat texture is 4 MB, but the asset size is 8 MB. Why?
I tried using asset bundles, but there are a lot of problems with them. I just want to store the top view with a depth buffer for water once and read it back from the texture. I also use VAT (vertex animation textures) for fluid/particle simulation, using float pixels as world-space positions, but with gamma correction the values come back wrong.
Right now I use this code:
/******
* The MIT License (MIT)
*
* Copyright (c) 2016 Bunny83
*
* Permission is hereby granted, free of charge, to any person obtaining a copy of
* this software and associated documentation files (the "Software"), to deal in
* the Software without restriction, including without limitation the rights to
* use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
* of the Software, and to permit persons to whom the Software is furnished to do
* so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in all
* copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
* Original source:
* https://www.dropbox.com/s/rqctkisgq178fba/Texture2DExtension.cs?dl=0
*******/
using UnityEngine;

public static class Texture2DExtension
{
    public enum DataFormat
    {
        NONE = 0,
        ARGBFloat = 1,
        ARGBUShort = 2,
    }

    #region ARGBFloat
    // Layout: 4-byte format id, width, height, then each pixel as four floats (ARGB order).
    private static void SaveARGBFloatUncompressed(Texture2D aTex, System.IO.BinaryWriter aWriter)
    {
        int w = aTex.width;
        int h = aTex.height;
        Color[] colors = aTex.GetPixels();
        aWriter.Write((uint)DataFormat.ARGBFloat);
        aWriter.Write(w);
        aWriter.Write(h);
        for (int i = 0; i < colors.Length; i++)
        {
            Color c = colors[i];
            aWriter.Write(c.a);
            aWriter.Write(c.r);
            aWriter.Write(c.g);
            aWriter.Write(c.b);
        }
    }

    private static void ReadARGBFloatUncompressed(Texture2D aTex, System.IO.BinaryReader aReader)
    {
        int w = aReader.ReadInt32();
        int h = aReader.ReadInt32();
        Color[] colors = new Color[w * h];
        for (int i = 0; i < colors.Length; i++)
        {
            Color c;
            c.a = aReader.ReadSingle();
            c.r = aReader.ReadSingle();
            c.g = aReader.ReadSingle();
            c.b = aReader.ReadSingle();
            colors[i] = c;
        }
        aTex.Resize(w, h); // Resize was renamed Reinitialize in newer Unity versions
        aTex.SetPixels(colors);
        aTex.Apply();
    }
    #endregion ARGBFloat

    #region ARGBUShort
    // Same layout, but each channel is quantized to an unsigned 16-bit integer.
    private static void SaveARGBUShortUncompressed(Texture2D aTex, System.IO.BinaryWriter aWriter)
    {
        int w = aTex.width;
        int h = aTex.height;
        Color[] colors = aTex.GetPixels();
        aWriter.Write((uint)DataFormat.ARGBUShort);
        aWriter.Write(w);
        aWriter.Write(h);
        for (int i = 0; i < colors.Length; i++)
        {
            Color c = colors[i];
            aWriter.Write((ushort)(c.a * 65535));
            aWriter.Write((ushort)(c.r * 65535));
            aWriter.Write((ushort)(c.g * 65535));
            aWriter.Write((ushort)(c.b * 65535));
        }
    }

    private static void ReadARGBUShortUncompressed(Texture2D aTex, System.IO.BinaryReader aReader)
    {
        int w = aReader.ReadInt32();
        int h = aReader.ReadInt32();
        Color[] colors = new Color[w * h];
        for (int i = 0; i < colors.Length; i++)
        {
            Color c;
            c.a = aReader.ReadUInt16() / 65535f;
            c.r = aReader.ReadUInt16() / 65535f;
            c.g = aReader.ReadUInt16() / 65535f;
            c.b = aReader.ReadUInt16() / 65535f;
            colors[i] = c;
        }
        aTex.Resize(w, h);
        aTex.SetPixels(colors);
        aTex.Apply();
    }
    #endregion ARGBUShort

    #region Extensions
    public static void SaveUncompressed(this Texture2D aTex, System.IO.Stream aStream, DataFormat aFormat)
    {
        using (var writer = new System.IO.BinaryWriter(aStream))
        {
            if (aFormat == DataFormat.ARGBFloat)
                SaveARGBFloatUncompressed(aTex, writer);
            else if (aFormat == DataFormat.ARGBUShort)
                SaveARGBUShortUncompressed(aTex, writer);
        }
    }

    public static void ReadUncompressed(this Texture2D aTex, System.IO.Stream aStream)
    {
        using (var reader = new System.IO.BinaryReader(aStream))
        {
            var format = (DataFormat)reader.ReadInt32();
            if (format == DataFormat.ARGBFloat)
                ReadARGBFloatUncompressed(aTex, reader);
            else if (format == DataFormat.ARGBUShort)
                ReadARGBUShortUncompressed(aTex, reader);
        }
    }

#if !UNITY_WEBPLAYER && !UNITY_WEBGL
    // File IO versions
    public static void ReadUncompressed(this Texture2D aTex, string aFilename)
    {
        using (var file = System.IO.File.OpenRead(aFilename))
        {
            aTex.ReadUncompressed(file);
        }
    }

    public static void SaveUncompressed(this Texture2D aTex, string aFilename, DataFormat aFormat)
    {
        using (var file = System.IO.File.Create(aFilename))
        {
            aTex.SaveUncompressed(file, aFormat);
        }
    }
#endif
    #endregion Extensions
}
In this case I can also use only the R/RG/RGB channels and custom compression.
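A hypothetical round trip with those extensions (the path is just an example):

string path = Application.persistentDataPath + "/height.tex";
tex.SaveUncompressed(path, Texture2DExtension.DataFormat.ARGBFloat);
var loaded = new Texture2D(2, 2); // size and contents are replaced by ReadUncompressed
loaded.ReadUncompressed(path);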
I wrote a new script that allows me to use compression and the original texture format. It’s also 10× faster (or more), and the textures are always in linear space regardless of gamma/linear rendering.
using System;
using System.Collections.Generic;
using System.IO;
using System.IO.Compression;
using UnityEngine;

public static class Texture2DExtensions
{
    public static Texture2D ReadTextureFromFile(string filename)
    {
        if (!File.Exists(filename + ".gz")) return null;
        using (var fileStream = File.Open(filename + ".gz", FileMode.Open))
        using (var decompressionStream = new GZipStream(fileStream, CompressionMode.Decompress))
        using (var stream = new MemoryStream())
        {
            decompressionStream.CopyTo(stream);
            var rawTextureDataWithInfo = stream.ToArray();

            // Header: 4-byte format, 4-byte width, 4-byte height, then the raw pixel data.
            var format = (TextureFormat)BitConverter.ToInt32(rawTextureDataWithInfo, 0);
            int width = BitConverter.ToInt32(rawTextureDataWithInfo, 4);
            int height = BitConverter.ToInt32(rawTextureDataWithInfo, 8);
            var rawTextureData = new byte[rawTextureDataWithInfo.Length - 12];
            Array.Copy(rawTextureDataWithInfo, 12, rawTextureData, 0, rawTextureData.Length);

            // linear: true, so no gamma curve is ever applied to the data
            var tex = new Texture2D(width, height, format, false, true);
            tex.LoadRawTextureData(rawTextureData);
            tex.Apply();
            return tex;
        }
    }

    public static byte[] Combine(byte[] first, byte[] second)
    {
        byte[] bytes = new byte[first.Length + second.Length];
        Buffer.BlockCopy(first, 0, bytes, 0, first.Length);
        Buffer.BlockCopy(second, 0, bytes, first.Length, second.Length);
        return bytes;
    }

    public static void SaveToFile(this Texture2D tex, string filename)
    {
        using (var fileToCompress = File.Create(filename + ".gz"))
        {
            // Prepend format, width and height so the reader can reconstruct the texture.
            var textureInfo = new List<byte>();
            textureInfo.AddRange(BitConverter.GetBytes((uint)tex.format));
            textureInfo.AddRange(BitConverter.GetBytes(tex.width));
            textureInfo.AddRange(BitConverter.GetBytes(tex.height));
            var rawTextureData = Combine(textureInfo.ToArray(), tex.GetRawTextureData());
            using (var compressionStream = new GZipStream(fileToCompress, CompressionMode.Compress))
            {
                compressionStream.Write(rawTextureData, 0, rawTextureData.Length);
            }
        }
    }
}
I use it for saving a depth texture to a 32-bit R float:
var currentRT = RenderTexture.active;
var tempRT = RenderTexture.GetTemporary(depthRT.width, depthRT.height, 0, RenderTextureFormat.RFloat, RenderTextureReadWrite.Linear);
Graphics.Blit(depthRT, tempRT);
RenderTexture.active = tempRT;
var tex = new Texture2D(depthRT.width, depthRT.height, TextureFormat.RGBAFloat, false, true);
tex.ReadPixels(new Rect(0, 0, depthRT.width, depthRT.height), 0, 0);
tex.Apply();
tex.SaveToFile(path);
RenderTexture.active = currentRT;
Unfortunately I can’t save individual channels, because ReadPixels requires all 4 channels:
“Unsupported texture format for ReadPixels - needs to be RGBA32, ARGB32, RGB24, RGBAFloat or RGBAHalf”
I tried the following code to avoid this problem:
var newTex = new Texture2D(depthRT.width, depthRT.height, TextureFormat.RFloat, false, true);
Graphics.ConvertTexture(tex, newTex);
In this case I get a new Texture2D with only the red channel.
But newTex.GetRawTextureData always returns an array of all zeros. I don’t know how to get the raw texture data in this case.
How can this be solved?
I feel like I’ve managed to copy single channel render textures before using ReadPixels, but maybe I’m misremembering.
A hacky workaround would be to create an ARGBFloat render texture and blit the contents of the RFloat into it before reading back to an RGBAFloat Texture2D.
I don’t quite understand how this helps, because the Texture2D in your case is RGBAFloat, but I need raw data for only one channel.
Right, but if Unity isn’t letting you copy data from an RFloat RenderTexture to an RFloat Texture2D, you can use Blit() to copy the RFloat RenderTexture to an ARGBFloat RenderTexture and copy it to an RGBAFloat Texture2D. From there, if you wish, you can copy the data from the single channel you want back to an RFloat Texture2D to save. Like I said, it’s a hacky workaround.
Of course, that’s exactly what I did first. I used this code:
var tempRT = RenderTexture.GetTemporary(depthRT.width, depthRT.height, 0, RenderTextureFormat.RGBAFloat, RenderTextureReadWrite.Linear);
//var tempRT = RenderTexture.GetTemporary(depthRT.width, depthRT.height, 0, RenderTextureFormat.RFloat, RenderTextureReadWrite.Linear);
// I tested both formats; no difference
Graphics.Blit(depthRT, tempRT);
RenderTexture.active = tempRT;
var tex = new Texture2D(depthRT.width, depthRT.height, TextureFormat.RGBAFloat, false, true);
tex.ReadPixels works with any render target format, but the Texture2D must be RGBA32/RGBAHalf/RGBAFloat.
Right now I don’t know how to copy data from a Texture2D (RGBA) to a Texture2D (R) without hitting this bug.
As you can see above, I use this code for that:
var newTex = new Texture2D(depthRT.width, depthRT.height, TextureFormat.RFloat, false, true);
Graphics.ConvertTexture(tex, newTex);
I can see the new texture with the R channel in the editor, but when I use newTex.GetRawTextureData() I only get an array of zero bytes.
I’m not entirely sure how the Graphics.ConvertTexture() function works. There’s not a ton of information on exactly what it does. There is a cryptic comment in the documentation about needing the destination texture to “correspond” to a supported render texture format that makes me think it does a Blit() to a render texture and ReadPixels() back to the target Texture2D… Which, since the whole reason you’re doing this is that ReadPixels to an RFloat format Texture2D isn’t working, probably means neither is this.
Honestly, if I were you I’d report it as a bug, because either it should work, or ConvertTexture should return an error message for single channel formats.
I think you’re going to have to do this manually. Either use RFloatTex.SetPixels(RGBAFloatTex.GetPixels()), or call GetRawTextureData() and manually copy the first 4 of every 16 bytes to a new array that you assign to the texture.
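For the byte-copying route, something like this sketch should work (assumes a mip-free RGBAFloat source; ExtractRed is a made-up helper name):

// Copy the first 4 bytes (the R float) of every 16-byte RGBAFloat pixel
// into an RFloat texture.
static Texture2D ExtractRed(Texture2D rgbaFloatTex)
{
    byte[] src = rgbaFloatTex.GetRawTextureData(); // 16 bytes per pixel
    byte[] dst = new byte[src.Length / 4];         // 4 bytes per pixel
    for (int i = 0; i < dst.Length; i += 4)
        System.Buffer.BlockCopy(src, i * 4, dst, i, 4);
    var rTex = new Texture2D(rgbaFloatTex.width, rgbaFloatTex.height, TextureFormat.RFloat, false, true);
    rTex.LoadRawTextureData(dst);
    rTex.Apply();
    return rTex;
}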
Big thanks! It works.
Related to this, for anyone else: you can work directly with 16-bit floating point channels using GetRawTextureData, via the “C# Half-precision data type” code and the additional notes below.
From that, just delete everything from the line containing “half_tests.cs” downward. Then add this to the top, under the namespace…
public struct HalfColor
{
    public Half r;
    public Half g;
    public Half b;
    public Half a;
}
You create a texture using the format like this…
Texture2D tex = new Texture2D(w, h, TextureFormat.RGBAHalf);
You get the NativeArray like this (don’t forget using SystemHalf;)…
NativeArray<HalfColor> pixels = tex.GetRawTextureData<HalfColor>();
You set a pixel like this (apply as usual when done, with tex.Apply())…
pixels[index] = new HalfColor { r = (Half)0.5f, g = 0, b = 0, a = 1 };
I had almost given up on the format just as I got it working… Hopefully this Half data type is cross-platform compatible? There must be a reason it’s not part of Unity to begin with…?
It is. It’s part of the Mathematics package.
https://github.com/Unity-Technologies/Unity.Mathematics/blob/d2dcf30f1dbd395171350304442c6a4f647cf290/src/Unity.Mathematics/half.cs
However, the reason it’s not included in default C# is kind of mentioned in the comments for the SystemHalf code you linked above.
It’s really slow.
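For comparison, the same raw-data trick with the Mathematics package’s half would look roughly like this (a sketch; assumes com.unity.mathematics is installed, and HalfPixel is a made-up struct):

using Unity.Collections;
using Unity.Mathematics;
using UnityEngine;

public struct HalfPixel { public half r, g, b, a; }

// ... then, when filling the texture:
var tex = new Texture2D(256, 256, TextureFormat.RGBAHalf, false, true);
NativeArray<HalfPixel> pixels = tex.GetRawTextureData<HalfPixel>();
pixels[0] = new HalfPixel { r = (half)0.5f, g = (half)0f, b = (half)0f, a = (half)1f };
tex.Apply();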