On the left is the output of running the precompiled model in Python; on the right is the output generated in Unity, which goes output tensor → RenderTexture → Texture2D.
It clearly looks like a color-space issue to me, but nothing I try makes it come out as expected. I've tried various linear/sRGB settings on the RenderTexture, as well as applying various linear/sRGB conversions to the pixel values in the Texture2D, but nothing makes it look right.
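For reference, the standard sRGB transfer functions (from IEC 61966-2-1, not from this thread) show why a missing or extra encode step produces exactly this "too dark" symptom — a sketch:

```python
# Standard sRGB <-> linear transfer functions. If sRGB-encoded pixel data is
# wrongly "decoded" again (or linear data is displayed without encoding),
# mid-tones drop sharply and the image looks too dark.

def srgb_to_linear(c):
    """Decode an sRGB-encoded channel value (0..1) to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode a linear-light channel value (0..1) to sRGB."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

mid = 0.5
print(linear_to_srgb(mid))  # ~0.735: linear mid-gray correctly encoded
print(srgb_to_linear(mid))  # ~0.214: the same value wrongly decoded — much darker
```

An extra decode (or skipped encode) roughly squares mid-range values, which matches a visibly darkened image.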
using System;
using Sirenix.OdinInspector;
using Unity.Barracuda;
using UnityEngine;
using UnityEngine.UI;

public class Generator_female : MonoBehaviour
{
    public int seed;
    public int noiseSize;
    public int imageSize;
    public float colorAdjust = 2.2f;
    public NNModel modelAsset;
    public Texture2D portrait;
    public RawImage destination;

    System.Random rand;

    Model m_RuntimeModel;
    IWorker m_Worker;

    void Start()
    {
        // Seed the System.Random that actually generates the noise.
        // (The original Random.InitState(seed) seeded UnityEngine.Random,
        // which was never used, so the seed field had no effect.)
        rand = new System.Random(seed);
        m_RuntimeModel = ModelLoader.Load(modelAsset);
        m_Worker = WorkerFactory.CreateWorker(WorkerFactory.Type.Compute, m_RuntimeModel);
    }

    [Button]
    public void CreatePortrait()
    {
        var mean = 0f;
        var stdDev = 1f;
        Tensor input = new Tensor(64, 1, 1, noiseSize); // batch of 64 latent vectors
        for (int i = 0; i < input.length; i++)
        {
            // Box–Muller transform: two uniform(0,1] samples -> one standard normal.
            double u1 = 1.0 - rand.NextDouble();
            double u2 = 1.0 - rand.NextDouble();
            double randStdNormal = Math.Sqrt(-2.0 * Math.Log(u1)) *
                                   Math.Sin(2.0 * Math.PI * u2);
            double randNormal = mean + stdDev * randStdNormal; // normal(mean, stdDev^2)
            input[i] = (float)randNormal;
        }

        m_Worker.Execute(input);
        Tensor O = m_Worker.PeekOutput();
        input.Dispose();

        var rTexture = new RenderTexture(imageSize, imageSize, 24,
            RenderTextureFormat.Default, RenderTextureReadWrite.sRGB);
        O.ToRenderTexture(rTexture);
        portrait = toTexture2D(rTexture);
        destination.texture = portrait;

        O.Dispose();
        rTexture.Release(); // actually free the GPU resource (DiscardContents only invalidates it)
    }

    void OnDestroy()
    {
        m_Worker.Dispose();
    }

    Texture2D toTexture2D(RenderTexture rTex)
    {
        Texture2D tex = new Texture2D(rTex.width, rTex.height, TextureFormat.RGB24, false);
        var previous = RenderTexture.active;
        RenderTexture.active = rTex;
        tex.ReadPixels(new Rect(0, 0, rTex.width, rTex.height), 0, 0);
        RenderTexture.active = previous; // restore instead of leaving our RT bound

        var pixels = tex.GetPixels();
        for (int i = 0; i < pixels.Length; i++)
        {
            pixels[i] = LinearToGamma(pixels[i]);
        }
        tex.SetPixels(pixels);
        tex.Apply();
        return tex;
    }

    Color LinearToGamma(Color c)
    {
        return new Color(Mathf.LinearToGammaSpace(c.r),
                         Mathf.LinearToGammaSpace(c.g),
                         Mathf.LinearToGammaSpace(c.b),
                         c.a);
    }
}
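For cross-checking the noise statistics on the Python side, the Box–Muller loop above can be mirrored like this (a sketch; note that Python's `random.Random` and C#'s `System.Random` are different generators, so the same seed will *not* reproduce the same sequence — for an exact input match you still need to serialize the noise, as discussed later in the thread):

```python
import math
import random

def gaussian_noise(n, seed, mean=0.0, std_dev=1.0):
    """Box-Muller transform mirroring the C# loop: two uniform(0,1]
    draws per sample -> one normal(mean, std_dev^2) sample."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u1 = 1.0 - rng.random()  # uniform (0, 1], so log(u1) is defined
        u2 = 1.0 - rng.random()
        z = math.sqrt(-2.0 * math.log(u1)) * math.sin(2.0 * math.pi * u2)
        out.append(mean + std_dev * z)
    return out

# e.g. batch of 64 latent vectors with a hypothetical noiseSize of 128
noise = gaussian_noise(64 * 128, seed=42)
```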
Shouldn’t the output of the network be the output of the network, irrespective of color spaces? It knows nothing about such nuances; it just spits out numbers. Is ToRenderTexture forcibly applying changes to the numbers? Can that be turned off?
Even if it is, why does applying colorspace math to the pixels afterward not fix it?
Training up an entire other model to apply a new style to the already generated images feels like a very wrong approach to me.
Except I don’t want my project to be in sRGB. This weekend I’m going to experiment more and see if I can grab the raw data out of the output tensor manually rather than going through ToRenderTexture.
Still seems I should have the option to tell it to NOT mess with the numbers at all.
Changing the project color space between Gamma and Linear has ZERO effect.
Directly grabbing the output tensor data and pumping it into a Texture2D produces the exact same incorrect output.
I’ve also verified it isn’t an issue with the display of the texture, by writing out the Texture2D to a PNG file and inspecting that outside Unity. The PNG is incorrectly dark as well.
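One way to confirm (or rule out) the color-space hypothesis from those two PNGs is to test whether the dark Unity values are simply the sRGB-decoded versions of the correct Python values — a sketch, assuming you can sample matching pixels from both images:

```python
def looks_like_missing_srgb_encode(expected, observed, tol=0.02):
    """Return True if each observed channel is approximately the sRGB->linear
    decode of the expected one, i.e. the image is dark because an sRGB
    encode step was skipped (or a decode applied twice) in the pipeline."""
    def srgb_to_linear(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    return all(abs(srgb_to_linear(e) - o) <= tol for e, o in zip(expected, observed))

# Hypothetical samples: these 'observed' values are exactly the decoded
# 'expected' values, so the check passes.
expected = [0.8, 0.5, 0.25]
observed = [0.6039, 0.2140, 0.0509]
print(looks_like_missing_srgb_encode(expected, observed))  # True
```

If this returns True for real pixel pairs, the darkening is a pure transfer-function mismatch rather than anything the model is doing.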
Again what really confuses me here is, why should any of that matter? All the model knows is that for a given input it generates a given output. Given that the input to the model in Unity is the same as the input in Python, why would it not generate the same output?
As a further test I literally wrote out the exact noise definition from Python and fed it into the Tensor in Unity. So they both had the exact same input.
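The fixture for that test can be as simple as serializing the noise vector once from Python and loading the identical file on both sides — a sketch with placeholder values:

```python
import json

# Serialize the noise once so the Python ONNX script and Unity consume
# byte-identical input values.
noise = [0.1, -0.5, 1.3]  # placeholder; in practice the Box-Muller samples
payload = json.dumps(noise)
# ... write `payload` to a file shared with the Unity project ...
assert json.loads(payload) == noise  # Python 3 float repr round-trips exactly
```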
I’m on Universal Render Pipeline 2019, and changing from gamma to linear changes the output, so perhaps you’re on another version?
Have you tried writing out the same image from your model in Python to see if it’s as dark?
The image in the original post compares an image written out in Python to an image created in Unity. Furthermore, the test I just did compared the output in Python to the output in Unity using the exact same noisefield: created in Python, written to JSON, then loaded into both the Python and Unity ONNX code to produce the same output. Yet again, the one in Unity is darker and the one in Python is as expected.
This is in Unity 2020.1.f1 but using built-in, not SRP
I think you meant that for me? This is what I am already doing. Comparing the execution of the model via ONNX in both Python and Unity.
EDIT: There certainly might still be something I am missing, as ML is new to me. I will try to take some time to set up an easily shareable sample in both Python and Unity for test purposes.
Apologies, @jwvanderbeck . Yes, I meant that for you.
What framework did you create the model in (e.g. PyTorch)? It would be great if you’d be willing to share your model and sample with me, so I can look into it further.
As I understand it the above should be using the same runtime. I am going to rework it though just to be absolutely sure it isn’t an input issue, to actually read in the EXACT same noise data for input that I am using in Unity.
I have not and to be honest I completely forgot about this! Between moving and slammed at work, I just plum forgot. I’ll try to find time to revisit it this weekend and get things packaged up.