I've been implementing UI Toolkit with render textures, drawing the UI into HDRP world space via a Canvas so I can render an object on top of it using volumes.
To do this I use a Canvas with a single image whose texture is the render texture. I set the Canvas scaling mode to match the render texture's size (2560x1440), then use the same scaling mode and reference size in the Panel Settings.
If the image is centered and scaled by the Canvas, and you add the Panel Event Handler and Panel Raycaster to the Canvas and hit Play with a UI Document open, the UI does not respond to mouse input correctly: it takes screen-space input and maps it straight to the base render texture size. This is problematic because it means windows can't be resized or scaled at runtime without swapping the render texture out for a new one every time.
I have done this, and had to use events to mark the image's material as dirty. It works, but it's a janky workaround for something that is built in and can't be changed.
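Roughly, the swap looks like this (a simplified sketch, assuming a RawImage displays the panel's render texture; the class and field names here are placeholders, not my actual code):

using UnityEngine;
using UnityEngine.UI;
using UnityEngine.UIElements;

// Sketch of the "swap the render texture" workaround described above.
public class PanelRenderTextureResizer : MonoBehaviour
{
    public RawImage targetImage;        // uGUI image that displays the panel's render texture
    public PanelSettings panelSettings; // the UI Toolkit panel drawing into that texture

    // Called whenever the window/image is resized at runtime.
    public void Resize(int width, int height)
    {
        var oldTexture = panelSettings.targetTexture;

        // Allocate a new render texture at the new size and point both
        // the panel and the image at it.
        var newTexture = new RenderTexture(width, height, 24);
        panelSettings.targetTexture = newTexture;
        targetImage.texture = newTexture;

        // Force the uGUI material to pick up the new texture immediately.
        targetImage.SetMaterialDirty();

        if (oldTexture != null)
            oldTexture.Release();
    }
}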
I tried to make my own Panel Raycaster, but the base class and interface, as well as the utility functions, are internal and not accessible from outside the Unity package itself.
I can provide pictures later if need be. It should also be noted that I'm using a virtual mouse device from the Input System to drive input. I'm on Unity 2022.1.0b11 using the latest version of UI Toolkit.
It would be nice to have this fixed, and a must-have would be the ability to drive the render texture/panel raycasting size directly from the size of the Canvas uGUI image element. That change would allow screen-space floating UI Documents with little to no frustration, or even any need for further integration.
UI Toolkit cannot be aware of any manipulation done outside of it, but mouse events can be remapped using SetScreenToPanelSpaceFunction on the PanelSettings.
I did some 3D UI work remapping mouse input from a mesh using a physics raycast and then finding the UV on the texture. It is essentially just an extra GameObject to add to your scene, with a few fields to populate. Your case seems similar, so you could possibly adapt this example. I think it would even be simpler, since you could go for geometric transformations instead of figuring out the UV of the texture (see the rough sketch after the sample below).
Let me know if this solves your problem.
using System;
using UnityEngine;
using UnityEngine.UIElements;

namespace Samples.Runtime.Rendering
{
    public class UITextureProjection : MonoBehaviour
    {
        public Camera m_TargetCamera;

        /// <summary>
        /// When using a render texture, this camera will be used to translate screen coordinates to the panel's coordinates.
        /// </summary>
        /// <remarks>
        /// If none is set, it will be initialized with Camera.main.
        /// </remarks>
        public Camera targetCamera
        {
            get
            {
                if (m_TargetCamera == null)
                    m_TargetCamera = Camera.main;
                return m_TargetCamera;
            }
            set => m_TargetCamera = value;
        }

        public PanelSettings TargetPanel;

        private Func<Vector2, Vector2> m_DefaultRenderTextureScreenTranslation;

        void OnEnable()
        {
            if (TargetPanel != null)
            {
                if (m_DefaultRenderTextureScreenTranslation == null)
                {
                    m_DefaultRenderTextureScreenTranslation = (pos) => ScreenCoordinatesToRenderTexture(pos);
                }

                TargetPanel.SetScreenToPanelSpaceFunction(m_DefaultRenderTextureScreenTranslation);
            }
        }

        void OnDisable()
        {
            // We reset it back to the default behavior.
            if (TargetPanel != null)
            {
                TargetPanel.SetScreenToPanelSpaceFunction(null);
            }
        }

        /// <summary>
        /// Transforms a screen position to a position relative to the render texture used by a MeshRenderer.
        /// </summary>
        /// <param name="screenPosition">The position in screen coordinates.</param>
        /// <returns>The coordinates in texel space, or a position containing NaN values if no hit was recorded or if the hit mesh's material is not using the render texture as its mainTexture.</returns>
        private Vector2 ScreenCoordinatesToRenderTexture(Vector2 screenPosition)
        {
            var invalidPosition = new Vector2(float.NaN, float.NaN);

            // The incoming y coordinate uses a top-left origin; flip it for ScreenPointToRay.
            screenPosition.y = Screen.height - screenPosition.y;
            var cameraRay = targetCamera.ScreenPointToRay(screenPosition);

            RaycastHit hit;
            if (!Physics.Raycast(cameraRay, out hit))
            {
                return invalidPosition;
            }

            var targetTexture = TargetPanel.targetTexture;
            MeshRenderer rend = hit.transform.GetComponent<MeshRenderer>();
            if (rend == null || rend.sharedMaterial.mainTexture != targetTexture)
            {
                return invalidPosition;
            }

            Vector2 pixelUV = hit.textureCoord;

            // Since y screen coordinates are usually inverted, we need to flip them.
            pixelUV.y = 1 - pixelUV.y;
            pixelUV.x *= targetTexture.width;
            pixelUV.y *= targetTexture.height;

            return pixelUV;
        }
    }
}
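For your screen-space Canvas setup, the geometric version could look something like this (an untested sketch; it assumes a RawImage displays the panel's render texture, and the field names are placeholders):

using UnityEngine;
using UnityEngine.UI;
using UnityEngine.UIElements;

// Sketch of the "geometric" variant for a screen-space Canvas: instead of a
// physics raycast, map the mouse position into the RawImage's RectTransform
// and scale it to the panel's render texture.
public class UIScreenSpaceRemap : MonoBehaviour
{
    public RawImage targetImage;      // the uGUI image displaying the panel's render texture
    public PanelSettings targetPanel; // the panel rendering into that texture
    public Camera uiCamera;           // leave null for a Screen Space - Overlay canvas

    void OnEnable()
    {
        targetPanel.SetScreenToPanelSpaceFunction(ScreenToPanel);
    }

    void OnDisable()
    {
        targetPanel.SetScreenToPanelSpaceFunction(null);
    }

    Vector2 ScreenToPanel(Vector2 screenPosition)
    {
        var invalidPosition = new Vector2(float.NaN, float.NaN);

        // The incoming position uses a top-left origin; flip it back to
        // regular screen coordinates for the RectTransform utilities.
        screenPosition.y = Screen.height - screenPosition.y;

        var rectTransform = targetImage.rectTransform;
        if (!RectTransformUtility.ScreenPointToLocalPointInRectangle(
                rectTransform, screenPosition, uiCamera, out var localPoint))
            return invalidPosition;

        // Normalize the local point to [0,1] across the image's rect.
        var r = rectTransform.rect;
        var normalized = new Vector2(
            (localPoint.x - r.xMin) / r.width,
            (localPoint.y - r.yMin) / r.height);

        if (normalized.x < 0 || normalized.x > 1 || normalized.y < 0 || normalized.y > 1)
            return invalidPosition;

        // Panel coordinates use a top-left origin, so flip y before scaling
        // to the render texture size.
        var texture = targetPanel.targetTexture;
        return new Vector2(
            normalized.x * texture.width,
            (1 - normalized.y) * texture.height);
    }
}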
Thanks, I eventually had to do this anyway to make UI Toolkit compatible with world-space UI in general. I didn't think at the time that it would also work for the screen-space UI.
We are moving a large part of the Editor to UI Toolkit, so we are currently tackling performance improvements before anything else, but we will hopefully be back soon to improve these workflows.
Personally, I think users may prefer having z-index before world-space UI, but I may be mistaken!