V6.024 Important Updates*
- fixed FMDesktop Capture lag and memory leak issue
V6.025
This is an amazing product. I’ve been using it for flat streaming from one computer to another, which is pretty simple.
The only part that is stumping me is mapping input through OpenXR to send mouse input to the remote computer from a world space UI.
When you start a new project in Unity 6, it lets you create a VR project (OpenXR) and has profiles set up for different headsets without relying on the Meta SDK.
The sample scene is already set up with world space UIs; it would be great to add a remote desktop example to it.
Is there an example of how to set up FMRemoteDesktop to use OpenXR, so it supports multiple headsets and uses the input mappings for mouse emulation?
Thanks for pointing this out; we will check and see if we can make a template for it.
Meanwhile, you can still refer to this simple joystick mapper for your implementation on different setups.
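As a rough, non-FM-specific starting point, below is a minimal sketch of polling an XR controller through Unity’s UnityEngine.XR.InputDevices API, which works under OpenXR across headsets. The OnTriggerPressed/OnTriggerReleased bodies are placeholders for whatever mouse-emulation methods your FMRemoteDesktop setup exposes; they are not the actual API.

using UnityEngine;
using UnityEngine.XR;

// Sketch: poll the right-hand XR controller each frame and forward its trigger
// state to a mouse-emulation callback. The actual call into FMRemoteDesktop is
// left as a placeholder, since the exact API depends on your setup.
public class XRTriggerToMouse : MonoBehaviour
{
    private InputDevice rightHand;
    private bool wasPressed;

    void Update()
    {
        // Re-acquire the device if it is not valid yet (e.g. the controller woke up late)
        if (!rightHand.isValid)
            rightHand = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);

        if (rightHand.TryGetFeatureValue(CommonUsages.triggerButton, out bool pressed))
        {
            if (pressed && !wasPressed) OnTriggerPressed();
            if (!pressed && wasPressed) OnTriggerReleased();
            wasPressed = pressed;
        }
    }

    private void OnTriggerPressed()
    {
        // Placeholder: call your FMRemoteDesktop mouse-down / click method here.
        Debug.Log("Trigger pressed -> send mouse down");
    }

    private void OnTriggerReleased()
    {
        // Placeholder: call your FMRemoteDesktop mouse-up method here.
        Debug.Log("Trigger released -> send mouse up");
    }
}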
Thanks, I’ll look into this some more.
This was another issue I noticed, unless I’m missing a setting.
If I create a world space UI with the Raw Image for the view, then when you run it the preview quad is created at the origin, set to a screen space UI.
If I set the preview to None, the connected desktop will not show in the UI.
So the workaround is to let it create the preview quad, wait a few milliseconds, and then disable it; then it shows on the world space UI.
using FMSolution.FMETP.FMRemoteDesktop;
using System.Threading.Tasks;
using UnityEngine;

// Workaround: let the viewer create the preview quad, then disable it shortly
// after startup so the stream only shows on the world space UI.
public class DisablePreviewQuad : MonoBehaviour
{
    [SerializeField] private int WaitMs = 250;

    private async void Start()
    {
        // Give the viewer a moment to create the preview quad before disabling it.
        await Task.Delay(WaitMs);
        var foundObject = GameObject.FindFirstObjectByType<FMRemoteDesktopViewer>();
        if (foundObject == null) return;
        foundObject.gameObject.SetActive(false);
    }
}
By default, the viewer prefab quad should be in 3D world space (canvas).
Perhaps in some versions of Unity, the prefab metadata changed to Overlay instead of 3D world space?
You will need the FMRemoteDesktopViewer Quad for input, because there is a collider on it for raycast purposes (your joystick ray will get the relative position from the viewer’s collider).
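For illustration, here is a minimal sketch of getting that relative position from the viewer quad’s collider with a controller ray. It assumes the viewer is a default Unity Quad (local extents -0.5 to 0.5); the class and field names are just illustrative, and how the resulting UV is forwarded to mouse emulation is left out.

using UnityEngine;

// Sketch: cast a ray from an XR controller transform against the viewer quad's
// collider and convert the hit point into normalized (0..1) coordinates.
public class ViewerQuadRaycast : MonoBehaviour
{
    public Transform controller;     // ray origin/direction, e.g. a VR controller
    public Collider viewerCollider;  // collider on the FMRemoteDesktopViewer quad

    void Update()
    {
        var ray = new Ray(controller.position, controller.forward);
        if (Physics.Raycast(ray, out RaycastHit hit) && hit.collider == viewerCollider)
        {
            // Convert to the quad's local space; a default quad spans -0.5..0.5
            Vector3 local = viewerCollider.transform.InverseTransformPoint(hit.point);
            Vector2 uv = new Vector2(local.x + 0.5f, local.y + 0.5f);
            Debug.Log($"Viewer hit at UV {uv}"); // map this UV to remote screen pixels
        }
    }
}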
This is an amazing asset. We are using v4.0 and are loving it.
However, we would appreciate help with one use case:
We are showing a UI in the form of a HUD to our VR user using an Overlay Camera, which we stack onto the Main Cam of the XR Rig. To stream the user’s view to the desktop, we are using a separate RenderCam which we move along with the Main Cam (because FMETP’s MainCam mode does not work with our setup of the XR Rig from the Interaction Toolkit + URP; I’m not sure which one is the culprit, but this was reported here before).
However, we could not yet find a way that allows us to also stream the content of the overlay cam. If we stack it onto the render cam used for the FMETP stream, it is still not streamed. Any suggestions?
Do you have an example scene of your setup? If so, please reach us via email.
technical support: thelghome@gmail.com
As far as I know, the overlay camera is only visible to the main eyes.
In FMETP STREAM 6, we improved the full screen capture mode (better performance), which may solve your need.
Otherwise, the traditional solution would be to use a World Space Canvas instead of an Overlay Canvas (placing it in front of your camera, in 3D space) for better performance; a minimal follow-canvas sketch is shown below.
In FMETP STREAM 6, the MainCam mode supports URP VR now.
You may send us an example scene to verify it if needed.
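For the World Space Canvas suggestion above, here is a minimal, hedged sketch of keeping a world space canvas positioned in front of the camera like a HUD. The distance and smoothing values are arbitrary, and the script name is illustrative.

using UnityEngine;

// Sketch: keep a world space canvas in front of the camera so it behaves like a
// HUD without an Overlay canvas. Attach to the canvas; tune values for comfort in VR.
public class FollowCameraCanvas : MonoBehaviour
{
    public Camera targetCamera;    // usually the XR rig's main camera
    public float distance = 1.5f;  // metres in front of the camera
    public float smoothing = 10f;  // higher = snappier follow

    void LateUpdate()
    {
        if (targetCamera == null) return;
        Transform cam = targetCamera.transform;
        Vector3 targetPos = cam.position + cam.forward * distance;
        transform.position = Vector3.Lerp(transform.position, targetPos, smoothing * Time.deltaTime);
        // Face away from the camera so the UI reads correctly from the camera's side
        transform.rotation = Quaternion.LookRotation(transform.position - cam.position);
    }
}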
It’s possible to eliminate the need for the quad and use the canvas directly. Just add a box collider sized to the canvas scale (a small setup sketch follows the code below), and on the RawImage place your Unlit/FMDesktopViewer material. I haven’t hooked it up to the actual material yet, but it shouldn’t be hard.
using TMPro;
using UnityEngine;
using UnityEngine.UI;

// This is testing code, so it isn't very optimized.
// Place this on an empty GameObject and hook up the values.
public class VRCanvasRaycast : MonoBehaviour
{
    // Doesn't need to be set here, but handy for testing
    public float ScreenWidth = 1920;
    public float ScreenHeight = 1080;

    public Transform forwardRay;      // Assign the empty GameObject that represents the ray direction (forward), e.g. attached to a VR controller
    public Canvas canvas;             // Assign the world space canvas
    public RawImage rawImage;         // Assign the RawImage component
    public TextMeshProUGUI outText;   // Just for display
    public LayerMask uiLayerMask;     // Select the UI layer

    private RectTransform canvasRect; // The size of the canvas

    private void Start()
    {
        // Cache the canvas rect
        if (canvas != null) canvasRect = canvas.GetComponent<RectTransform>();
    }

    void Update()
    {
        Ray ray = new Ray(forwardRay.position, forwardRay.forward);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit, Mathf.Infinity, uiLayerMask))
        {
            // Debug.Log(hit.collider.gameObject.name);
            // See if we hit the canvas with the collider
            if (hit.collider.gameObject == canvas.gameObject)
            {
                // Get the pixel location
                Vector2 pixelUV = GetPixelHitPosition(hit.point);

                // Convert to int
                int ux = Mathf.RoundToInt(Mathf.Clamp(pixelUV.x, 0, ScreenWidth));
                int uy = Mathf.RoundToInt(Mathf.Clamp(pixelUV.y, 0, ScreenHeight));

                // Display which pixel the ray is pointing at
                outText.SetText($"({ux},{uy})");
            }
        }
    }

    Vector2 GetPixelHitPosition(Vector3 worldHitPosition)
    {
        // Convert world-space hit point to local position relative to the Canvas
        Vector3 localHitPosition = canvasRect.transform.InverseTransformPoint(worldHitPosition);

        // Get the actual Canvas size (ignoring world scale)
        Vector2 canvasSize = canvasRect.sizeDelta; // Should be 1920x1080 or whatever resolution you want to set

        // Normalize local position to [0,1] based on canvas size
        float normalizedX = (localHitPosition.x + (canvasSize.x * 0.5f)) / canvasSize.x;
        float normalizedY = (localHitPosition.y + (canvasSize.y * 0.5f)) / canvasSize.y;

        // Convert to 1920x1080 pixel coordinates in this example
        float pixelX = normalizedX * ScreenWidth;
        float pixelY = (1 - normalizedY) * ScreenHeight;

        return new Vector2(pixelX, pixelY);
    }
}
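For the “box collider sized to the canvas scale” step mentioned above, here is a small setup sketch; the collider depth of 0.01 is arbitrary, and the class name is illustrative.

using UnityEngine;

// Sketch: add and size a BoxCollider to match a world space canvas so the
// raycast above can hit it. Runs once at startup.
[RequireComponent(typeof(Canvas))]
public class CanvasColliderSetup : MonoBehaviour
{
    void Start()
    {
        var rect = GetComponent<RectTransform>();
        var box = gameObject.GetComponent<BoxCollider>();
        if (box == null) box = gameObject.AddComponent<BoxCollider>();
        // sizeDelta is in canvas units; the canvas' world scale is applied automatically
        box.size = new Vector3(rect.sizeDelta.x, rect.sizeDelta.y, 0.01f);
    }
}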
Thanks for your suggestion, we will consider it.