How do I implement selecting, moving, and rotating objects in VR mode? In MR mode, EnhancedSpatialPointerSupport lets me determine which objects are interactive, but in VR mode, VisionOSSpatialPointerState gives me no way to know which objects are interactive. How should this be achieved?
Hey there! Thanks for reaching out with your question.
Input in VR mode works similarly to a mouse or controller, in that you are notified about the what/where of the input, and it’s up to you to figure out what to do with it. In the most basic case, you would put a collider on the object you want to interact with, and call Physics.Raycast using the eye gaze vector to decide what was hit. If you look at InputTester.cs in the com.unity.xr.visionos package samples (installed via the Package Manager), that should give you a good idea of how to accomplish this.
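To illustrate the basic idea, here is a minimal sketch (my own, using the same pointerState and m_CameraOffset names you’ll see in the InputTester.cs sample; it assumes the object you want to interact with has a collider):

// Convert the gaze ray from the camera offset's local space into world space.
var rayOrigin = m_CameraOffset.TransformPoint(pointerState.startRayOrigin);
var rayDirection = m_CameraOffset.TransformDirection(pointerState.startRayDirection);
// See what, if anything, the user was looking at when they pinched.
if (Physics.Raycast(new Ray(rayOrigin, rayDirection), out var hitInfo))
{
    // hitInfo.transform is the object to select/move/rotate.
}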
Thanks again for reaching out and good luck!
I looked at the sample. When the phase is “Moved”, I use startRayOrigin, startRayDirection, startRayRotation, and interactionRayRotation to get the starting point and direction of the ray. But I don’t think I’m getting the ray exactly right. Can you provide an example to help me solve this problem?
Sorry, I’m not sure how to parse this sentence.
Could you provide the code you are using? I might be able to identify the issue, but I don’t currently have enough information to help.
I don’t know what the problem is, and I’m not sure there’s a better example than the code in InputTester.cs. Maybe it will help if I decorate that example with some comments explaining what each line of code is doing?
var phase = pointerState.phase;
// Create a bool variable that is true if the touch phase is "Began," meaning this is the
// first frame that the user pinched their fingers
var began = phase == VisionOSSpatialPointerPhase.Began;
// Create a bool variable that is true if the touch phase is "Began" or "Moved," meaning the
// user is pinching their fingers this frame
var active = began || phase == VisionOSSpatialPointerPhase.Moved;
// Cache a reference to the object that will visualize the "device" (pinch) pose.
var deviceTransform = objects.Device;
// Cache a reference to the object that will visualize the gaze ray.
var rayTransform = objects.Ray;
// Show these visualization objects while the pinch is active; hide them otherwise
deviceTransform.gameObject.SetActive(active);
rayTransform.gameObject.SetActive(active);
// Do the stuff inside this scope on the first frame that the user pinched their fingers
if (began)
{
// Transform the ray origin position from the camera offset's local space into world
// space. This is the world-space position of the ray that we can use for the raycast.
var rayOrigin = m_CameraOffset.TransformPoint(pointerState.startRayOrigin);
// Transform the ray direction from the camera offset's local space into world space.
// This is the world-space direction of the ray that we can use for the raycast.
var rayDirection = m_CameraOffset.TransformDirection(pointerState.startRayDirection);
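// Position and orient the object that visualizes the gaze ray.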
rayTransform.SetPositionAndRotation(rayOrigin, Quaternion.LookRotation(rayDirection));
// Create a ray that begins at rayOrigin and points in the direction of rayDirection.
var ray = new Ray(rayOrigin, rayDirection);
// Perform a raycast using that ray to see if it hits any colliders.
var hit = Physics.Raycast(ray, out var hitInfo);
// Cache a reference to the object that will visualize the interaction position
// (where the ray hit the object).
var targetTransform = objects.Target;
// Show that visualization object only if the raycast hit something.
targetTransform.gameObject.SetActive(hit);
// Move the visualization object to the point where the ray hit. If nothing was hit,
// hitInfo.point is Vector3.zero, but that's harmless because the object is inactive.
targetTransform.position = hitInfo.point;
}
// If the user is pinching their fingers this frame, update the transform (set position/rotation)
// of the object that visualizes the "device" (pinch) pose.
if (active)
deviceTransform.SetLocalPositionAndRotation(pointerState.inputDevicePosition, pointerState.inputDeviceRotation);
Hopefully this helps clear things up?
I want the code in Picture 1 and Picture 2 to implement the move/rotate function shown in Picture 3, but with the code in Picture 2, when my input moves quickly, the object moves incorrectly. How can I implement this function? I hope you can help me.
OK, I think I see the issue. You are doing the raycast on every frame, which means that the interactionRay must still stay within the bounds of the object. So as you move your hand quickly, there is a frame where the ray gets ahead of the object, no longer intersects, and you fail to pass the check on line 54. Instead, you can just check whether your selectedObj field is not null along with your else if on line 48.
So, for example, I think it will work with the following modifications:
- Line 48: else if (primaryPointer.phase == VisionOSSpatialPointerPhase.Moved && selectedObj != null)
- Remove lines 54 and 56
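Put together, the flow I’m describing would look roughly like this (a sketch rather than your exact code, since I can only see the screenshots; primaryPointer and selectedObj are the names I’m assuming from them):

var phase = primaryPointer.phase;
if (phase == VisionOSSpatialPointerPhase.Began)
{
    // Raycast only once, on the first frame of the pinch, to pick the object.
    var rayOrigin = m_CameraOffset.TransformPoint(primaryPointer.startRayOrigin);
    var rayDirection = m_CameraOffset.TransformDirection(primaryPointer.startRayDirection);
    if (Physics.Raycast(new Ray(rayOrigin, rayDirection), out var hitInfo))
        selectedObj = hitInfo.transform;
}
else if (phase == VisionOSSpatialPointerPhase.Moved && selectedObj != null)
{
    // No raycast here. Keep driving the selected object from the device pose, even
    // if the gaze ray has drifted off the object during a fast movement.
    // ...move/rotate selectedObj using primaryPointer.inputDevicePosition and
    // primaryPointer.inputDeviceRotation...
}
else if (phase == VisionOSSpatialPointerPhase.Ended || phase == VisionOSSpatialPointerPhase.Cancelled)
{
    // Release the selection when the pinch ends.
    selectedObj = null;
}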
Does that solve the issue?
I think I know what you mean, but I have another question. If I remove lines 54 and 56, how do I get the current interaction position, similar to “SpatialPointerState.interactionPosition” in MR mode?
Unfortunately, interactionPosition isn’t a thing in VR/Metal mode. The OS doesn’t know where your 3D objects are, so it can only give you the gaze ray and “device position,” which represents where your pinched fingers are.
You should be able to use that device position to figure out where the interaction position would be if this were MR mode. Specifically, if you take the offset from the device position to the raycast hit position on the frame the pinch began, and then continue to add that vector to the device position on subsequent frames, you will have an interaction position that updates as the user moves their hand.
For example, say that you do a pinch and the ray hits a cube that is 1 meter in front of your fingers, giving an interaction position of (0, 0, 1). For simplicity, let’s assume your pinched fingers started at (0, 0, 0), so the device position is (0, 0, 0) and the offset between your fingers and the interaction position is (0, 0, 1). As you move your pinched fingers to the right, you end up with a “device position” of (0.1, 0, 0). Taking that initial offset and applying it to the new device position, we now have an interaction position of (0.1, 0, 1) and we don’t need to rely on a raycast hitting the cube.
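In code, that might look something like this (a rough sketch; m_InteractionOffset is a field name I made up, and the device position is converted to world space with m_CameraOffset the same way the sample converts the ray origin):

if (began)
{
    // ...after the raycast above hits: cache the offset from the pinch (device)
    // position to the hit point.
    var devicePosition = m_CameraOffset.TransformPoint(pointerState.inputDevicePosition);
    m_InteractionOffset = hitInfo.point - devicePosition;
}
else if (phase == VisionOSSpatialPointerPhase.Moved)
{
    // Re-apply the cached offset to the current device position to get an
    // interaction position that tracks the hand, with no raycast required.
    var devicePosition = m_CameraOffset.TransformPoint(pointerState.inputDevicePosition);
    var interactionPosition = devicePosition + m_InteractionOffset;
    // Use interactionPosition the way you would use
    // SpatialPointerState.interactionPosition in MR mode.
}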
Does that make sense?