Hi all,
This question is about render textures, tensors, and Barracuda (ML-Agents).
I have written a piece of code that takes two render textures and feeds them into a Barracuda-embedded object-classifier model.
// prepare the model worker
var worker = BarracudaWorkerFactory.CreateWorker(BarracudaWorkerFactory.Type.ComputePrecompiled, model);

// pack both render textures into a single batch-of-2 tensor
var textures = new[] { texture_1, texture_2 };
var tensor = new Tensor(textures, channelCount);

// run inference and peek at the output tensor
worker.Execute(tensor);
var output = worker.Peek();

// copy the two 128-dimensional output vectors (one per batch entry)
var vector1 = new List<double>(128);
var vector2 = new List<double>(128);
for (int i = 0; i < 128; i++)
{
    vector1.Add(output[0, 0, 0, i]);
    vector2.Add(output[1, 0, 0, i]);
}

tensor.Dispose();
I’ve placed this in the Update() function, and the two different textures input into the Barracuda .nn model generate different model outputs (vector1 and vector2), which is what I expect.
However, when I move this code into my own function and call it from the
public override void AgentAction(float[] vectorAction, string textAction) {} function, the two outputs come back identical.
When I inspect the render textures’ pixels, they really are different, so the problem must lie with either the Tensor or the model.
It feels like a timing issue.
Maybe AgentAction executes at a moment in the frame when the render textures haven’t been updated yet, or when there are no GPU resources available for the Barracuda model?
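In case it helps to discuss, one workaround I’m considering for the timing hypothesis is to defer the inference until the end of the frame, after all cameras have rendered, instead of running it inline inside AgentAction. This is only a sketch built on my existing code above (worker, texture_1/texture_2, and channelCount are the same fields as before; the coroutine name is made up):

// Sketch: run inference once rendering has finished, so the render
// textures should hold this frame's pixels when the tensor is built.
private IEnumerator ClassifyAtEndOfFrame()
{
    // WaitForEndOfFrame resumes after all cameras/GUI have rendered
    yield return new WaitForEndOfFrame();

    var textures = new[] { texture_1, texture_2 };
    var tensor = new Tensor(textures, channelCount);
    worker.Execute(tensor);
    var output = worker.Peek();
    // ... read vector1 / vector2 out of `output` as in the snippet above ...
    tensor.Dispose();
}

// and in AgentAction, instead of running inference directly:
// StartCoroutine(ClassifyAtEndOfFrame());

Does that sound like a plausible fix, or is the problem elsewhere?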