100% with you. Theory is nice, but even if we have bugs, you’ve still got to ship.
So, I attached an XR Slider. Mostly copy-paste, hacking out some Internal calls, and modifying a single function.
That function is UpdateDrag:
void UpdateDrag(PointerEventData eventData, Camera cam)
{
    RectTransform clickRect = m_HandleContainerRect ?? m_FillContainerRect;
    if (clickRect != null && clickRect.rect.size[(int)axis] > 0)
    {
        Vector2 position = Vector2.zero;
        if (eventData is TrackedDeviceEventData trackedDeviceEventData)
        {
            // Build a plane from the slider rect's world-space corners,
            // then intersect the XR pointer ray with it.
            clickRect.GetWorldCorners(s_Corners);
            var plane = new Plane(s_Corners[0], s_Corners[1], s_Corners[2]);
            var rayPoints = trackedDeviceEventData.rayPoints;
            for (var i = 1; i < rayPoints.Count; i++)
            {
                var from = rayPoints[i - 1];
                var to = rayPoints[i];
                var rayDistance = Vector3.Distance(to, from);
                var ray = new Ray(from, to - from);
                if (plane.Raycast(ray, out var distance))
                {
                    if (distance < rayDistance)
                    {
                        // Project the world-space hit back into screen space so
                        // the rest of the stock slider logic works unchanged.
                        var worldPoint = ray.origin + (ray.direction * distance);
                        position = cam.WorldToScreenPoint(worldPoint);
                        Debug.DrawLine(from, worldPoint);
                        break;
                    }
                }
            }
        }
        else
        {
            position = eventData.position;
        }

        Vector2 localCursor;
        if (!RectTransformUtility.ScreenPointToLocalPointInRectangle(clickRect, position, cam, out localCursor))
            return;
        localCursor -= clickRect.rect.position;

        float val = Mathf.Clamp01((localCursor - m_Offset)[(int)axis] / clickRect.rect.size[(int)axis]);
        normalizedValue = (reverseValue ? 1f - val : val);
    }
}
I cut out multi-monitor support (it relied on internal calls), and everything inside the if (eventData is TrackedDeviceEventData trackedDeviceEventData) branch
is custom. What I do is create a Plane struct that represents the slider's click rect, then figure out where the pointer ray intersects that virtual plane. I use the intersection to determine a screen position, and then fall through to all the same logic as the normal slider. The scrolling area should be very similar: search for eventData.position and replace that value with the raycast-against-plane code above.
However, this only fixes half the issue. The positioning is now correct, but it doesn't have the same behaviour as a mouse, which keeps calculating the scroller's position even when it isn't pointing at anything. The XR version just sort of stops whenever I point into open space.
I traced this down to the UIInputModule. There is some code in internal void ProcessTrackedDevice(ref TrackedDeviceModel deviceState, bool force = false)
that is broken on drag. If you look at this blob:
Vector2 screenPosition;
if (eventData.pointerCurrentRaycast.isValid)
{
    screenPosition = camera.WorldToScreenPoint(eventData.pointerCurrentRaycast.worldPosition);
}
else
{
    var endPosition = eventData.rayPoints.Count > 0 ? eventData.rayPoints[eventData.rayPoints.Count - 1] : Vector3.zero;
    screenPosition = camera.WorldToScreenPoint(endPosition);
    eventData.position = screenPosition; // <------- This line right here needs to be deleted
}

var thisFrameDelta = screenPosition - eventData.position;
eventData.position = screenPosition;
eventData.delta = thisFrameDelta;
I put a comment on the line that needs to be removed. It's short-circuiting the calculation of eventData.delta directly below it: by overwriting eventData.position early, the subtraction always produces zero. Removing it brings back the ability to update a drag while pointing at nothing. This one is a little awkward to patch in, as you'll also need to bring in a custom XRUIInputModule, and I believe a lot of XRI is internal. You may be able to clone the package and put it locally in your Packages folder if you just want to make custom modifications directly.
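For clarity, here is a sketch of what that blob looks like after the fix, as it would appear in a locally cloned copy of the module. This assumes you've worked around the internal access; it's the same code with only the flagged line removed.

```csharp
// Corrected version of the ProcessTrackedDevice blob above.
Vector2 screenPosition;
if (eventData.pointerCurrentRaycast.isValid)
{
    screenPosition = camera.WorldToScreenPoint(eventData.pointerCurrentRaycast.worldPosition);
}
else
{
    var endPosition = eventData.rayPoints.Count > 0
        ? eventData.rayPoints[eventData.rayPoints.Count - 1]
        : Vector3.zero;
    screenPosition = camera.WorldToScreenPoint(endPosition);
    // The early eventData.position assignment is gone: keeping the stale
    // position here is what lets thisFrameDelta come out non-zero below,
    // so drags keep updating even when the ray hits nothing.
}

var thisFrameDelta = screenPosition - eventData.position; // now a real delta
eventData.position = screenPosition;
eventData.delta = thisFrameDelta;
```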
Please report both of these as bugs. I know these systems and want to help, but I'm working on other things, and it's doubtful I'll be the one tackling these issues directly.
Hope this helps!
7300807–883936–XRSlider.cs (29.8 KB)