Shape recognition from controller input in VR

Hello, I was wondering if there's a generally accepted way to map controller gestures to various gameplay effects. For example: imagine a wizard's wand casting fire when a "Z" shape is drawn and lightning when a "circle" is drawn.

I've done a decent amount of research, but am ultimately coming up close to empty. I'd like this to be entirely controller-position based (and as such will not utilize hand tracking). The generally accepted solution appears to be a neural network trained on a set of gestures, which you can then map to whatever action you choose. I'm also curious how computationally heavy a net would be. I assume it'd be lightweight at runtime, as inference is more or less just a function evaluation over the inputs.

I found this on the Asset Store: https://assetstore.unity.com/packages/tools/behavior-ai/vr-magic-gestures-ai-88011#description but it looks to be abandoned and would require some work to get it functional with the current Oculus libraries.

Just wanted to see if anyone had approaches or recommendations before I start diving into this library refactor. (Side note: I am using ECS, so a data-oriented network would be ideal.)

If your needs are relatively simple, the $1 family of recognizers can be extended to 3D. (It was originally developed for 2D stroke input, e.g. handwriting.) These recognizers resample and normalize the input stroke or point cloud, then compare it against a stored set of gesture templates. Since that's just array math over fixed-length point lists, it should be ECS friendly.
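To give a rough sense of the shape of that approach, here's a minimal sketch of a $1-style recognizer lifted to 3D. All names here are illustrative (not from the original paper or any asset), and templates are assumed to be already resampled and normalized:

```csharp
using System.Collections.Generic;
using UnityEngine;

// $1-style recognition in 3D: resample the controller path to a fixed
// point count, normalize for position and scale, then score against
// stored templates by mean point-to-point distance.
public static class GestureRecognizer3D
{
    const int SampleCount = 32;

    static float PathLength(List<Vector3> pts)
    {
        float length = 0f;
        for (int i = 1; i < pts.Count; i++)
            length += Vector3.Distance(pts[i - 1], pts[i]);
        return length;
    }

    // Resample the raw path to SampleCount evenly spaced points.
    public static List<Vector3> Resample(IReadOnlyList<Vector3> input)
    {
        var pts = new List<Vector3>(input);
        float interval = PathLength(pts) / (SampleCount - 1);
        float accumulated = 0f;
        var result = new List<Vector3> { pts[0] };

        for (int i = 1; i < pts.Count; i++)
        {
            float d = Vector3.Distance(pts[i - 1], pts[i]);
            if (d > 0f && accumulated + d >= interval)
            {
                Vector3 q = Vector3.Lerp(pts[i - 1], pts[i],
                                         (interval - accumulated) / d);
                result.Add(q);
                pts.Insert(i, q);   // q becomes the next segment start
                accumulated = 0f;
            }
            else
            {
                accumulated += d;
            }
        }
        while (result.Count < SampleCount)  // guard against float drift
            result.Add(pts[pts.Count - 1]);
        return result;
    }

    // Translate the centroid to the origin and scale to unit size, so a
    // gesture matches regardless of where or how large it was drawn.
    public static void Normalize(List<Vector3> pts)
    {
        Vector3 centroid = Vector3.zero;
        foreach (var p in pts) centroid += p;
        centroid /= pts.Count;

        float maxDist = 0f;
        for (int i = 0; i < pts.Count; i++)
        {
            pts[i] -= centroid;
            maxDist = Mathf.Max(maxDist, pts[i].magnitude);
        }
        if (maxDist > 0f)
            for (int i = 0; i < pts.Count; i++) pts[i] /= maxDist;
    }

    // Lower = better match. The original $1 also searches over rotations;
    // for a world-space wand you may want "Z" and "N" to stay distinct,
    // so rotation invariance is deliberately omitted here.
    public static float Score(List<Vector3> a, List<Vector3> b)
    {
        float sum = 0f;
        for (int i = 0; i < SampleCount; i++)
            sum += Vector3.Distance(a[i], b[i]);
        return sum / SampleCount;
    }
}
```

At runtime you'd resample and normalize the recorded wand path, take the template with the lowest score, and reject anything above a threshold so random waving doesn't cast spells. Since everything is flat arrays of points, it should port cleanly to a data-oriented/jobs setup.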

If you do go with a neural net, take a look at the Unity Sentis package, which can run already-trained networks (imported as ONNX models) inside a Unity app.
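Inference is roughly this shape. This sketch assumes the Sentis 1.x API (class names have shifted between versions, so check the current docs), and assumes a classifier trained on a flattened (1, N) vector of path features:

```csharp
using Unity.Sentis;
using UnityEngine;

// Hypothetical sketch: classify a gesture with a pretrained network
// imported as an ONNX ModelAsset. Based on the Sentis 1.x API.
public class GestureNet : MonoBehaviour
{
    public ModelAsset modelAsset;   // your trained network, imported as ONNX
    IWorker worker;

    void Start()
    {
        Model model = ModelLoader.Load(modelAsset);
        // CPU backend keeps results immediately readable; GPUCompute is
        // also available if you'd rather not stall the main thread.
        worker = WorkerFactory.CreateWorker(BackendType.CPU, model);
    }

    // 'features' is the normalized gesture path flattened to floats;
    // the (1, N) input shape is an assumption about how the net was trained.
    public int Classify(float[] features)
    {
        using var input = new TensorFloat(
            new TensorShape(1, features.Length), features);
        worker.Execute(input);

        var output = worker.PeekOutput() as TensorFloat;
        output.MakeReadable();          // sync results back from the backend

        int best = 0;                   // argmax over class scores
        for (int i = 1; i < output.shape[1]; i++)
            if (output[0, i] > output[0, best]) best = i;
        return best;
    }

    void OnDestroy() => worker?.Dispose();
}
```

Your intuition about runtime cost is right: a small classifier like this is just a fixed sequence of matrix ops per gesture, so it's cheap compared to rendering. The expensive part is collecting and labeling the training gestures.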

These both look like wonderful resources, thank you.