I am creating a VR menu that is meant to look like it is being displayed on a TV in the room where you sit. When I place objects in front of the TV to act as UI elements, rotating them makes them clip through the screen, and the freedom in how elements can be laid out on the TV is limited.
Therefore I want to render the menu with a separate camera and use that camera's view as a render texture on the TV.
How can I wire this up so that interaction with the virtual TV is propagated through the secondary camera, so that the built-in Unity UI events can be used safely?
In other words, I want to proxy all interaction from the main camera through the secondary camera, with its render texture acting as the viewport.
I have come up with an intermediate solution that I want to share, even though it has some drawbacks that I think can be solved once I understand the InputModules and EventSystem better.
I'm providing my current implementation to help people reading this understand the problem better.
using UnityEngine;
using UnityEngine.EventSystems;

public class TVScreen : MonoBehaviour, IPointerClickHandler
{
    // Camera that provides the image on the TV.
    public Camera tvCamera;

    // Callback that receives the collider the TV camera hit "through" the TV.
    public delegate void OnHit(Collider collider);

    public void Awake()
    {
        // Point the TV camera at the render texture this object displays.
        RenderTexture tvScreen = (RenderTexture)GetComponent<MeshRenderer>().material.mainTexture;
        tvCamera.targetTexture = tvScreen;
    }

    public void OnPointerClick(PointerEventData eventData)
    {
        // When a user clicks this GameObject, hand the event to our proxy method.
        ProxyPointerEvent(eventData, "OnMouseDown");
    }

    private void ProxyPointerEvent(PointerEventData eventData, string messageDestination)
    {
        // Build a ray from the PointerEventData, which is used to figure out the texture
        // coordinate of the render texture for the camera.
        // These coordinates may be available from the PointerEventData directly in some form,
        // but I have not been able to find a direct substitute.
        Ray eyeRay = eventData.pressEventCamera.ScreenPointToRay(eventData.position);
        // Send this ray to our local Raycast method, and provide a callback which is called
        // with the collider that is hit "behind" the TV.
        Raycast(eyeRay, collider => collider.gameObject.SendMessage(messageDestination));
    }

    public void Raycast(Ray eyeRay, OnHit onHit)
    {
        RaycastHit eyeHit, tvHit;
        // Cast the incoming ray to get the textureCoord on the TV surface.
        // Note: textureCoord is only filled in when the hit collider is a MeshCollider.
        if (!Physics.Raycast(eyeRay, out eyeHit))
            return;
        // Create a second ray that extends from the camera providing the TV image.
        Ray tvRay = tvCamera.ViewportPointToRay(new Vector3(eyeHit.textureCoord.x, eyeHit.textureCoord.y, 0));
        Debug.DrawRay(tvRay.origin, tvRay.direction * 10, Color.red, 3);
        // Cast the secondary ray from the TV image camera.
        if (Physics.Raycast(tvRay, out tvHit))
        {
            Debug.Log(tvHit.collider);
            // Invoke the callback with the collider that was hit.
            onHit(tvHit.collider);
        }
    }
}
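For completeness, here is a minimal sketch (not from the original post) of what a receiver for the SendMessage proxy above could look like: any collider "behind" the TV whose GameObject has a method matching the message name will react to the proxied click. The class name TVButton is made up for illustration:

using UnityEngine;

public class TVButton : MonoBehaviour
{
    // Matches the "OnMouseDown" message name passed to ProxyPointerEvent above.
    void OnMouseDown()
    {
        Debug.Log("Proxied click received on " + name);
    }
}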
Wanted to update this and add the current solution, which is cleaner and honors all of the built-in PointerEvents.
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.EventSystems;

public class VirtualScreen : GraphicRaycaster
{
    public Camera screenCamera;

    // Called by Unity when a Raycaster should raycast, because this extends BaseRaycaster.
    public override void Raycast(PointerEventData eventData, List<RaycastResult> resultAppendList)
    {
        Ray ray = eventCamera.ScreenPointToRay(eventData.position); // Mouse
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit))
        {
            RaycastBeyondTV(hit, resultAppendList);
        }
    }

    private void RaycastBeyondTV(RaycastHit originHit, List<RaycastResult> resultAppendList)
    {
        // Figure out where the pointer would be in the second camera based on the
        // texture coordinate of the RenderTexture.
        Vector3 virtualPos = new Vector3(originHit.textureCoord.x, originHit.textureCoord.y);
        Ray ray = screenCamera.ViewportPointToRay(virtualPos);
        Debug.DrawRay(ray.origin, ray.direction * 10, Color.red, 0.2f);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit))
        {
            RaycastResult result = new RaycastResult
            {
                gameObject = hit.collider.gameObject,
                module = this,
                distance = hit.distance,
                index = resultAppendList.Count,
                worldPosition = hit.point,
                worldNormal = hit.normal,
            };
            resultAppendList.Add(result);
        }
    }
}
What object do you attach a GraphicRaycaster to?
I attached it to the game object that held the texture and got a null reference looking for a canvas.
So I stopped and thought and looked around, and noticed that the canvas had a GraphicRaycaster on it. So I removed the default one and added this one. Now I get tons of null references: eventCamera is now null. That's part of the base Raycaster. Is there something I'm missing?
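In case it helps, a minimal sketch (an assumption on my part, not confirmed in this thread) of one way around the null eventCamera: the base GraphicRaycaster resolves eventCamera from its Canvas, so you can expose the camera the player actually looks through and override the property yourself. The playerCamera field is hypothetical:

// Hypothetical addition to VirtualScreen:
public Camera playerCamera; // assumed field: assign the camera the user looks through

public override Camera eventCamera
{
    get { return playerCamera != null ? playerCamera : base.eventCamera; }
}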
Hey! Happened across this just now while looking for a virtual screen interaction solution.
The code above doesn't seem to work with Unity's new UI system (or I wasn't able to make it work), as the Physics.Raycast() call doesn't detect anything I put in the canvas that I'm displaying via the render texture.
However, with a minor modification, this does seem to work with the new UI:
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.EventSystems;

public class VirtualScreen : GraphicRaycaster
{
    public Camera screenCamera; // Reference to the camera responsible for rendering the virtual screen's render texture
    public GraphicRaycaster screenCaster; // Reference to the GraphicRaycaster of the canvas displayed on the virtual screen

    // Called by Unity when a Raycaster should raycast, because this extends BaseRaycaster.
    public override void Raycast(PointerEventData eventData, List<RaycastResult> resultAppendList)
    {
        Ray ray = eventCamera.ScreenPointToRay(eventData.position); // Mouse
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit))
        {
            // Only proxy the event if the ray actually hit this virtual screen.
            if (hit.collider.transform == transform)
            {
                // Figure out where the pointer would be on the second camera's screen,
                // based on the texture coordinate of the hit.
                Vector3 virtualPos = new Vector3(hit.textureCoord.x, hit.textureCoord.y);
                virtualPos.x *= screenCamera.targetTexture.width;
                virtualPos.y *= screenCamera.targetTexture.height;
                eventData.position = virtualPos;
                screenCaster.Raycast(eventData, resultAppendList);
            }
        }
    }
}
Using this, hooked up to the correct camera and GraphicRaycaster, I was able to mouse over and click on standard canvas-based buttons displayed via a render texture on an in-world screen. Hope that's of some use!
EDIT: I subsequently discovered some quite important caveats!
The first, I have added to the code above (you need to check that the raycast has hit the screen!).
The second is that you must disable the GraphicRaycaster on the canvas that's generating the screen content (the one referenced as "screenCaster"). If you don't, then clicks off the virtual screen will be handled by that raycaster as though they were on its own canvas. For example: if there were a button in the top left of the virtual screen, you would be able to click it by clicking the top left of the game screen, as well as by clicking the top left of the virtual screen. So: disable the GraphicRaycaster on the proxy canvas.
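One way to make that second caveat automatic, as a sketch assuming the setup above: have VirtualScreen disable the proxy raycaster itself. A disabled GraphicRaycaster is no longer queried by the EventSystem, but calling its Raycast() manually from VirtualScreen still works.

// Hypothetical addition to VirtualScreen: disable the proxy canvas's own
// raycaster so the EventSystem never queries it directly. VirtualScreen
// still invokes screenCaster.Raycast() by hand, which works while disabled.
protected override void Start()
{
    base.Start();
    if (screenCaster != null)
        screenCaster.enabled = false;
}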
I just copied and pasted it back into a new project of mine and it compiled fine. Make sure you copied everything exactly, including this line:
public class VirtualScreen : GraphicRaycaster
GraphicRaycaster is the class that contains the Raycast method the code overrides. If you've created a blank class in Unity and tried to paste the methods into it, your blank class will be extending MonoBehaviour rather than GraphicRaycaster.
@Peeling
Hi, has anyone solved this?
The if statement is never true…
Ray ray = eventCamera.ScreenPointToRay(eventData.position); // Mouse
RaycastHit hit;
if (Physics.Raycast(ray, out hit))
{
    // Figure out where the pointer would be in the second camera based on the texture coordinate.
    Vector3 virtualPos = new Vector3(hit.textureCoord.x, hit.textureCoord.y);
    virtualPos.x *= screenCamera.targetTexture.width;
    virtualPos.y *= screenCamera.targetTexture.height;
    eventData.position = virtualPos;
    screenCaster.Raycast(eventData, resultAppendList);
}
That's not a fault with the code; it must be a problem with your scene.
You need to set it up as follows: your TV/phone/whatever object needs to have a MeshCollider and the VirtualScreen component on it.
The "Screen Camera" and "Screen Caster" fields need to point at the camera doing the RTT and at the GraphicRaycaster on the canvas used to render its content, respectively.
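As a sanity check, here is a hedged validation sketch (not from the thread) that logs the wiring mistakes discussed here at startup; note that RaycastHit.textureCoord is only populated when the ray hits a MeshCollider, which is why that component is required:

// Hypothetical addition to VirtualScreen: catch common setup mistakes early
// instead of failing silently at runtime.
protected override void Awake()
{
    base.Awake();
    if (GetComponent<MeshCollider>() == null)
        Debug.LogError("VirtualScreen needs a MeshCollider; textureCoord is only set for MeshCollider hits.", this);
    if (screenCamera == null || screenCamera.targetTexture == null)
        Debug.LogError("Assign the render-texture camera to Screen Camera.", this);
    if (screenCaster == null)
        Debug.LogError("Assign the proxy canvas's GraphicRaycaster to Screen Caster.", this);
}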
Thanks for replying to me!
But it still doesn't work. I mean, it detects a hit from the raycast, but the UI doesn't interact. I'll try to explain what I have.
I have one camera and a canvas that renders to this camera (the canvas has a lot of UI: buttons, text fields, and more…).
This camera is displayed via a render texture on a RawImage on another canvas.
The issue is that when I try to use the UI in the render texture, it doesn't detect any interactions…
So I tried to use your solution like this:
To the canvas that has the RawImage I added the "VirtualScreen" script and assigned:
Screen Camera - the camera that has the render texture.
Screen Caster - the GraphicRaycaster of the canvas of the camera that has the render texture.
Then I added a MeshCollider and tried to change the canvas to World Space, but it still doesn't work…
Share your project and Iâll take a look, if you like. It sounds as though you have things on the wrong objects.
The camera and canvas that are rendering to the texture don't need anything added to them.
The virtual screen (the object with the texture on it, the one you want to click on in-game) needs a mesh collider, a world-space camera, and the virtualscreen script.
My render texture is on a UI canvas instead of being in world space; it's part of the GUI. This only works if your RenderTexture is on a world-space object inside the scene itself.
It's the other way around: the scene needs to be on the RenderTexture. Think of it as playing on a TV or something.
You remember those old DOS dungeon crawlers? I'm trying to do something like that. On the left side, you have a window with all your characters, HP and MP and so on; on the right side you have your stats, inventory, minimap, compass, etc. On the bottom you have the message log, e.g. "You entered the crypt of the abyss" or "You attacked the goblin for 32 HP of damage", etc.
And just on top of that, you have your "Game Screen", which is the scenes themselves: the whole 3D world and all that. I did it this way so I could position the game screen snugly in between the other "views" and not have it go bonkers with different resolutions and whatnot.
However, sometimes I need to raycast into the scenes to be able to pick up items, click on things for puzzles, hover over objects to highlight them, etc.
Ah, now I understand. That's not really the same problem as this code is intended to solve.
Personally, I would stop using a render texture and use a script to adjust the game camera's viewport to fit an invisible rect autosized to fit that area. That will then allow you to cast rays by detecting clicks on the rect and converting the mouse coordinates to a world ray using the game camera in the usual way. There are a few methods you can find online for getting the screen-space coords of a rect.
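For illustration, a rough sketch of that viewport approach, assuming the game-screen area is an ordinary RectTransform on a Screen Space - Overlay canvas (the class and field names here are made up, not from this thread):

using UnityEngine;

public class ViewportFitter : MonoBehaviour
{
    public Camera gameCamera;        // the camera that used to render to the texture
    public RectTransform screenRect; // the invisible rect marking the game-screen area

    void LateUpdate()
    {
        // On an overlay canvas, world corners are already in screen pixels.
        // Order: bottom-left, top-left, top-right, bottom-right.
        Vector3[] corners = new Vector3[4];
        screenRect.GetWorldCorners(corners);
        float x = corners[0].x / Screen.width;
        float y = corners[0].y / Screen.height;
        float w = (corners[2].x - corners[0].x) / Screen.width;
        float h = (corners[2].y - corners[0].y) / Screen.height;
        // Clicks on the rect can then be converted to world rays with
        // gameCamera.ScreenPointToRay as usual.
        gameCamera.rect = new Rect(x, y, w, h);
    }
}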
Wanted to express my gratitude to @warpom and @Peeling for this post and the setup info. I was using a canvas to do some UI stuff, and this allowed me to move it to a "TV" screen in-world without a lot of effort. Works perfectly.