I managed to implement a version where the raycast comes from the camera, which closely approximates what I want, but it isn't as accurate as I'd like. I suspect this has something to do with the controller being 3DoF while the camera is 6DoF. I will study the documentation and experiment with the system more; I just don't know what's going on under the hood, and I was hoping someone might have some insight. I'm certainly not the only one who has tried to do something like this with the Vive Focus and similar systems.
What specifically was not working? Be sure that your raycast is using a bitmask, and set your controller and anything attached to it to the “ignore raycast” layer, so the ray isn’t just getting stuck on the controller itself. Once you do that, using the hand transform’s position and .forward property should work fine. I use the hand transform, not the controller.
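Something along these lines is what I mean (just a rough sketch, not tested against your setup; the "Creature" layer name and the OnPointedAt call are placeholders for whatever you're actually using):

    using UnityEngine;

    // Rough sketch: cast a ray from the hand transform and only react to objects
    // on a specific layer. The controller model itself should sit on Unity's
    // built-in "Ignore Raycast" layer (or any layer excluded from the mask).
    public class HandRaycaster : MonoBehaviour
    {
        public Transform handTransform;   // assign the hand/controller transform in the inspector
        public float maxDistance = 50f;

        private int layerMask;

        void Start()
        {
            // Bitmask with only the "Creature" layer's bit set, so the terrain and
            // the controller can never be what the ray reports hitting.
            layerMask = LayerMask.GetMask("Creature");
        }

        void Update()
        {
            Ray ray = new Ray(handTransform.position, handTransform.forward);
            if (Physics.Raycast(ray, out RaycastHit hit, maxDistance, layerMask))
            {
                Debug.Log("Hit " + hit.collider.name + " at " + hit.distance);
                // e.g. hit.collider.GetComponent<Creature>()?.OnPointedAt();
            }
        }
    }

(If you'd rather keep hitting everything except the controller, you can leave the mask argument out entirely and just put the controller on the built-in "Ignore Raycast" layer, which the default raycast already skips.)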
The raycast is supposed to collide with a game object, such as a creature, which triggers an event or response from the creature.
When I attach the script to the camera (or, in the case of the latest Vive Wave SDK, the center camera), it works just fine.
When I attach the same script to the controller, the ray strikes the terrain most of the time.
I don't think the ray is getting stuck on the controller. Through testing I learned that the ray was mostly striking the terrain; this is based on the collision data in the logs. I didn't have a visual reference for the raycast when using the headset itself. I found the line renderer code examples confusing and wasn't confident I could use one to visualize the raycast accurately. It was also hard to test in the editor, as the simulator didn't work properly with some versions of the Unity engine and Wave SDK.
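Is something like this roughly what the line renderer setup should look like? (Just my rough attempt based on the examples; the field names are placeholders.)

    using UnityEngine;

    // Rough attempt: draw the ray from the hand so it's visible in the headset.
    // Assumes a LineRenderer component is on the same GameObject and handTransform
    // is assigned in the inspector.
    [RequireComponent(typeof(LineRenderer))]
    public class RayVisualizer : MonoBehaviour
    {
        public Transform handTransform;
        public float maxDistance = 50f;

        private LineRenderer line;

        void Start()
        {
            line = GetComponent<LineRenderer>();
            line.positionCount = 2;
            line.startWidth = 0.005f;
            line.endWidth = 0.005f;
        }

        void Update()
        {
            Vector3 start = handTransform.position;
            Vector3 end = start + handTransform.forward * maxDistance;

            // Shorten the line to the hit point when the ray actually strikes something.
            if (Physics.Raycast(start, handTransform.forward, out RaycastHit hit, maxDistance))
            {
                end = hit.point;
            }

            line.SetPosition(0, start);
            line.SetPosition(1, end);
        }
    }

In the editor I assume I could also call Debug.DrawRay(start, handTransform.forward * maxDistance) to see the ray in the Scene view without a line renderer at all.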
I’m pretty sure at some point, during one of the iterations, I referenced the hand transforms. I’ll double-check though.
I'm unfamiliar with bitmasks. I'll study that as well.
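From a quick look, is the idea just that each layer is one bit in a 32-bit int? Something like this (the layer names are placeholders from my project):

    // Each Unity layer corresponds to one bit in a 32-bit int.
    int creatureLayer = LayerMask.NameToLayer("Creature");   // layer index, e.g. 8
    int creatureMask  = 1 << creatureLayer;                  // mask with only that bit set

    // Combine several layers with bitwise OR, or use the helper:
    int mask = LayerMask.GetMask("Creature", "Terrain");

    // Invert with ~ to mean "everything except these layers":
    int allButIgnored = ~LayerMask.GetMask("Ignore Raycast");

    // The mask is then passed as the last argument so only matching layers can be hit:
    // Physics.Raycast(origin, direction, out hit, maxDistance, creatureMask);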