I don’t know, because while I do a ton of Go development these days, I don’t use the Oculus prefabs or sample code. They may make everything much more complicated than it needs to be.
Here’s my standard setup:

Player is just a GameObject that acts as the root transform for wherever the player is. You would move this around if you want the user to move around in the world, and rotate it if you implement a snap-turn (say, in response to a swipe on the thumb disc — users who prefer to play seated rather than standing are pretty insistent on this).
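For concreteness, here’s a minimal sketch of what a snap-turn on the Player root might look like. The class name, the 30° angle, and the swipe-to-turn mapping are all my assumptions, not a prescription; on the Go controller the touchpad can be read through OVRInput.

```csharp
using UnityEngine;

// Hypothetical snap-turn script: attach to the Player root object.
// The angle and the touchpad mapping are assumptions; adjust to taste.
public class SnapTurn : MonoBehaviour
{
    public float snapAngle = 30f;

    void Update()
    {
        // Fires once per touchpad click; use the horizontal touch
        // position to pick a direction. One plausible mapping of many.
        if (OVRInput.GetDown(OVRInput.Button.PrimaryTouchpad))
        {
            Vector2 touch = OVRInput.Get(OVRInput.Axis2D.PrimaryTouchpad);
            if (touch.x > 0.5f)
                transform.Rotate(0f, snapAngle, 0f);   // turn right
            else if (touch.x < -0.5f)
                transform.Rotate(0f, -snapAngle, 0f);  // turn left
        }
    }
}
```

Rotating the root (rather than the camera) is the point: the camera’s rotation is owned by the headset, so the snap has to happen above it in the hierarchy.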
Main Camera is the main camera. You don’t need any scripts or anything else there; it rotates automatically. Indeed, the default “Sample Scene” Unity gives you when you create a new project works just fine in VR for looking around. I added a mouse-rotation script for use when testing within the IDE, but that’s strictly a nicety for testing.
ControllerHolder is also there for testing, and it also has a mouse-rotation script (using different modifier keys) for testing in the IDE. Otherwise, it wouldn’t be needed at all. And then under that, Controller is the transform that moves around as the user waves their controller. Inside that you put whatever visual objects you want; in this case, that’s the Beatron disc and glove, plus some other stuff (GUIBeam, etc.) that I activate during menu interaction or whatever. When you’re starting out, you should just stick a cube in there and make sure you can wave it about.
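A testing script of the sort mentioned above might look like this — a hedged sketch, with the modifier key, sensitivity, and class name all assumed. Attach one copy to Main Camera and another (with a different modifier) to ControllerHolder:

```csharp
using UnityEngine;

// Editor-only test aid: hold the modifier key and move the mouse to
// rotate this object. Does nothing in a device build.
public class EditorMouseLook : MonoBehaviour
{
#if UNITY_EDITOR
    public KeyCode modifier = KeyCode.LeftAlt;
    public float sensitivity = 2f;
    float yaw, pitch;

    void Update()
    {
        if (!Input.GetKey(modifier)) return;
        yaw += Input.GetAxis("Mouse X") * sensitivity;
        pitch -= Input.GetAxis("Mouse Y") * sensitivity;
        transform.localRotation = Quaternion.Euler(pitch, yaw, 0f);
    }
#endif
}
```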
How do you wave it about? This does require a little code, but it’s not hard. Have a script somewhere that, on Update, sets the local rotation of that Controller object according to OVRInput.GetLocalControllerRotation, and sets local position by OVRInput.GetLocalControllerPosition. That’s it.
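That update can be sketched in a few lines — attach something like this to the Controller object (class name is mine; the OVRInput calls are the real API):

```csharp
using UnityEngine;

// Copies the tracked controller pose into this object's local
// transform each frame. That's all the "waving about" requires.
public class ControllerTracker : MonoBehaviour
{
    void Update()
    {
        // Whichever controller is currently active
        // (e.g. RTrackedRemote on the Go).
        OVRInput.Controller active = OVRInput.GetActiveController();
        transform.localPosition = OVRInput.GetLocalControllerPosition(active);
        transform.localRotation = OVRInput.GetLocalControllerRotation(active);
    }
}
```

Because these are set as *local* position and rotation, the pose lands relative to ControllerHolder, which is exactly what lets the holder double as a mouse-driven stand-in when testing in the IDE.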
So my suggestion: throw out all that nonsense, make a simple set-up like above, and get on with it. (And, be sure to join us on the GOmmunity Discord!)