Errors trying to get input shape from ONNX model: AssertionException: Cannot get value of dim with type != DimType.Value

I’m getting an error trying to run this code to create an input tensor from the model data:

TensorShape shape = new TensorShape(runtimeModel.inputs[0].shape.ToTensorShape());
Tensor inputTensor = new TensorFloat(shape, floatData);
AssertionException: Cannot get value of dim with type != DimType.Value
Assertion failure. Value was False
Expected: True
UnityEngine.Assertions.Assert.Fail (System.String message, System.String userMessage) (at <3606bacee77f4acf91b1c3a0dc94aeb7>:0)
UnityEngine.Assertions.Assert.IsTrue (System.Boolean condition, System.String message) (at <3606bacee77f4acf91b1c3a0dc94aeb7>:0)
Unity.Sentis.Logger.AssertIsTrue (System.Boolean condition, System.String msg) (at ./Library/PackageCache/com.unity.sentis/Runtime/Core/Logger.cs:48)
Unity.Sentis.SymbolicTensorDim.get_value () (at ./Library/PackageCache/com.unity.sentis/Runtime/Core/ShapeInference/SymbolicTensorDim.cs:87)
Unity.Sentis.SymbolicTensorShape.ToTensorShape () (at ./Library/PackageCache/com.unity.sentis/Runtime/Core/ShapeInference/SymbolicTensorShape.cs:125)
SentisMovement.testSentis () (at Assets/Scripts/SentisMovementScript.cs:103)
SentisMovement.Start () (at Assets/Scripts/SentisMovementScript.cs:144)

Hardcoding the shape like this works, but was hoping not to have to do this:

TensorShape shape2 = new TensorShape(1, 2004);

Here’s a screenshot of my model Inspector:

I am assuming you are doing
model.inputs[0].shape.ToTensorShape()?

You can see that your model input has a dynamic shape (?, 2004), therefore we cannot convert it to a fixed TensorShape.

Yes, below is the loading function where I’m trying to set up the worker with a known input shape, so I don’t have to hardcode shapes or create them later. The goal here is to avoid hardcoding the input shape. I may not be understanding the intended setup flow, as this feels a little hacky. Thoughts?

private void LoadModel() {
        // Load the metadata json
        ModelMetadata metadata = JsonUtility.FromJson<ModelMetadata>(modelMetadataAsset.text);
        modelLabels = metadata.labels;

        // Load the model asset
        runtimeModel = ModelLoader.Load(modelAsset);

        // Loop through each input
        List<Model.Input> inputs = runtimeModel.inputs;
        
        // Build a TensorShape from the symbolic shape (unk__11, 2004).
        // Dynamic dims (e.g. the batch) fall back to 1; known dims keep their value.
        // Hardcoding the shape also works:
        // modelTensorShape = new TensorShape(1, 2004);
        int[] shapeDims = new int[inputs[0].shape.rank];

        for (int i = 0; i < inputs[0].shape.rank; i++) {
            shapeDims[i] = inputs[0].shape[i].isValue ? inputs[0].shape[i].value : 1;
        }

        modelTensorShape = new TensorShape(shapeDims);

        // Create the worker
        // TODO: should this be kept in memory or disposed of after each use?
        modelWorker = WorkerFactory.CreateWorker(BackendType.GPUCompute, runtimeModel);
    }

As I mentioned, your input shape is dynamic.
You can call your model with input (1, 2004), (2, 2004), (10, 2004) and it all works.
That is represented by a SymbolicTensorShape, which lives in the Input struct.
This symbolic shape is thus only partly known; you can check it with IsFullyKnown.
Therefore we cannot convert it to a known TensorShape :slight_smile:
If you know you will always call your model with a shape of (1, 2004), then you need to build a new TensorShape yourself:

var knownShape = new TensorShape(1, runtimeModel.inputs[0].shape[1].value);

Hope that helps! We’ll add more to the documentation regarding this; I understand it can be confusing.
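Putting the replies together, a defensive version of the original call might look like this. This is only a sketch against the Sentis API described in this thread (`SymbolicTensorShape`, `IsFullyKnown`, `isValue`/`value`); `runtimeModel` and `floatData` are the variables from the posts above:

// Sketch: only call ToTensorShape() when every dim is static;
// otherwise substitute 1 for any dynamic dimension (e.g. the batch).
SymbolicTensorShape symbolicShape = runtimeModel.inputs[0].shape;

TensorShape shape;
if (symbolicShape.IsFullyKnown()) {
    // Safe: no dynamic dims, so the conversion cannot assert.
    shape = symbolicShape.ToTensorShape();
} else {
    int[] dims = new int[symbolicShape.rank];
    for (int i = 0; i < symbolicShape.rank; i++) {
        dims[i] = symbolicShape[i].isValue ? symbolicShape[i].value : 1;
    }
    shape = new TensorShape(dims);
}

Tensor inputTensor = new TensorFloat(shape, floatData);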


Thanks! I suppose there’s also a way to build the model with a known shape of (1, 2004). Will try that as well.

UPDATE: I’ve since learned that ONNX conversion does not directly support static batch sizes in the input shape. That makes sense, so accommodating this in the code that uses the model seems like the right path.

This code seems to do the trick if we’re running the model one batch/row at a time:

private void LoadPoseModel() {
        // Load the metadata json
        ModelMetadata metadata = JsonUtility.FromJson<ModelMetadata>(modelMetadataAsset.text);
        modelLabels = metadata.labels;

        // Load the model asset
        runtimeModel = ModelLoader.Load(modelAsset);

        // Loop through each input
        List<Model.Input> inputs = runtimeModel.inputs;
        TensorShape[] inputShapes = new TensorShape[inputs.Count];

        // ONNX model input shapes have a dynamic batch size, so we pin the
        // first dimension to 1; otherwise the model will fail to execute.
        for (int inputIndex = 0; inputIndex < inputs.Count; inputIndex++) {
            Model.Input input = inputs[inputIndex];

            // Copy the static dims, replacing the first (batch) dim with 1.
            int[] shapeDims = new int[input.shape.rank];
            shapeDims[0] = 1;
            for (int i = 1; i < input.shape.rank; i++) {
                shapeDims[i] = input.shape[i].value;
            }

            inputShapes[inputIndex] = new TensorShape(shapeDims);
        }

        // Set the input tensor shapes
        modelInputTensorShapes = inputShapes;

        // Create the worker
        // TODO: should this be kept in memory or disposed of after each use?
        modelWorker = WorkerFactory.CreateWorker(BackendType.GPUCompute, runtimeModel);
    }   
// create a tensor from the float data 
        Tensor inputTensor = new TensorFloat(modelInputTensorShapes[0], floatData);

        // Execute the model
        modelWorker.Execute(inputTensor);

        // Get the output tensor
        // TODO: is PeekOutput the right method to use?
        TensorFloat outputTensor = modelWorker.PeekOutput() as TensorFloat;
        outputTensor.MakeReadable();
        float[] outputData = outputTensor.ToReadOnlyArray();
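On the TODOs above: in Sentis, tensors and workers implement IDisposable, so one common pattern (a sketch under that assumption, not code from this thread; names mirror the fields used above) is to keep the worker alive for the component’s lifetime and release per-inference input tensors:

// Sketch: keep the worker across inferences, dispose per-inference tensors.
void RunInference(float[] floatData) {
    using (Tensor inputTensor = new TensorFloat(modelInputTensorShapes[0], floatData)) {
        modelWorker.Execute(inputTensor);

        // PeekOutput returns a tensor still owned by the worker:
        // read it, but do not dispose it yourself.
        TensorFloat outputTensor = modelWorker.PeekOutput() as TensorFloat;
        outputTensor.MakeReadable();
        float[] outputData = outputTensor.ToReadOnlyArray();
    }
}

void OnDestroy() {
    // Dispose the worker once, when the component is torn down.
    modelWorker?.Dispose();
}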

This is known internally as Task 454.