We’ve been trying to capture high-quality screenshots and video of our fully immersive URP-based app, following these instructions:
Whenever we go into capture mode with the Reality Composer Pro tool, everything renders out of only one eye on the device and looks warped. The resulting capture shows only the passthrough hands and a single horizontal line of visuals.
Has anyone managed to get good results with this tool? We are on the 1.0 OS release and using Xcode 15.2…
Hi there! Unfortunately this is not something you can work around on your end; we will need to add explicit support for this feature. In the meantime, have you tried making recordings directly on the device? You can open Control Center from the indicator menu (look up and pinch) and start a recording by long-pinching the record button. This saves the video recording (or a screenshot, if you do a regular pinch) to your device, which you can then share to your laptop with AirDrop. At the moment, this is the only way I’m aware of to capture video from the device in a fully immersive (VR) Unity app.
Right. I should also mention that this might bump up against another issue where a VR app will crash if it is not connected to the debugger when you open Control Center. You don’t need a debug or development build; you just need the debugger attached. You can either run your app from Xcode in Release mode, or attach the debugger to a running instance of your app by going to Debug > Attach with Xcode connected to the device.
Thanks for the response! I have used the on-device recorder functionality, but it leaves something to be desired for marketing materials. Would be great if this could be supported in the future - I just submitted it as an idea for the roadmap.
Nothing yet. We’re starting work on this soon, but it’s not a trivial task. This mode requires the app to render different resolutions to each eye, which isn’t something we have had to implement for other platforms. As such, there are a number of places that need to be updated where we’ve made the (perfectly reasonable, IMO) assumption that both eyes render at the same resolution.
I’ll circle back on this thread when we have an ETA for supporting this feature.
If you disable Foveated Rendering under Player Settings → XR Plug-in Management → Apple visionOS and then rebuild, you can at least capture proper visuals from the left eye (the right eye is still warped while recording).
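To add to the above: the supported path is the Player Settings toggle, but if you want to confirm (or zero out) foveation from script, Unity's generic `XRDisplaySubsystem` API exposes a foveation level. A minimal sketch - whether the visionOS provider actually honors a runtime change here is an assumption you'd need to verify on device:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR;

public class DisableFoveationForCapture : MonoBehaviour
{
    void Start()
    {
        // Find all active XR display subsystems.
        var displays = new List<XRDisplaySubsystem>();
        SubsystemManager.GetSubsystems(displays);

        foreach (var display in displays)
        {
            // 0 = no foveation, 1 = maximum foveation.
            display.foveatedRenderingLevel = 0f;
        }
    }
}
```

You could attach this only in a capture-specific build so your normal builds keep foveation enabled.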
We’re still working on a proper fix, but with com.unity.xr.visionos@1.1.4 we provided the ability to disable foveated rendering. This allows the workaround that @jdiehl_unity3d described above:
You should be able to make 4K recordings this way. I just recommend closing your right eye so you don’t get a headache!
Just to be clear, this thread/forum is about fully immersive VR apps built with Unity. This app mode does not use PolySpatial, which is our technology for Unity to use RealityKit for rendering in mixed reality. Are you trying to record a mixed reality app with Reality Composer Pro? As far as I know, that should work just fine.
Disabling foveated rendering reduces the overall app resolution considerably. I’ve also noticed a significant drop in performance when recording in this mode. Any way to get around this? Apple is asking me for promotional materials for my app captured with Reality Composer Pro, and I’m at a loss on what to do. I really don’t want to lose this opportunity with them.
Is this true in the final recording? Doesn’t the Reality Composer Pro capture process bump the resolution up to 4K? I’m pretty sure you have to disable foveation for this to work, since they don’t want foveation artifacts to be visible in the final capture. Unfortunately there’s not much else we can do about this until we fix this properly within Unity. We’re limited by how the platform exposes these rendering features, and it’s proving difficult to get Unity to work properly in this situation, where we have to render stereo but with different resolutions in each eye. At this point, there is no better way to capture footage from a VR app on visionOS built with Unity.
I managed to capture some 4K footage by following these instructions today. It’s super disorienting (and Reality Composer Pro needs to wait for the headset to cool down after nearly every attempt), but it does work!
One tip: our performance was pretty choppy when I first tried this, but turning anti-aliasing down to 2x (from 4x) in the URP config asset got it smooth enough for us. Looking forward to an official solution, but glad there is at least a path.
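For anyone who wants to flip that MSAA setting from code instead of editing the URP asset in the Inspector, here's a hedged sketch. `UniversalRenderPipeline.asset` and `msaaSampleCount` are standard URP APIs; doing this at runtime (e.g. only while capturing) is my own idea, not something verified in this thread:

```csharp
using UnityEngine.Rendering.Universal;

public static class CaptureQualityHelper
{
    // Call before starting a Reality Composer Pro capture session.
    // Valid sample counts are 1, 2, 4, or 8.
    public static void SetMsaa(int sampleCount)
    {
        var urpAsset = UniversalRenderPipeline.asset;
        if (urpAsset != null)
            urpAsset.msaaSampleCount = sampleCount;
    }
}
```

For example, `CaptureQualityHelper.SetMsaa(2);` before capture and `SetMsaa(4);` after. Note that in the Editor this modifies the pipeline asset itself, so in a build is the safer place to toggle it.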
Actually I may have spoken too soon about getting smooth performance by dropping to 2x MSAA - seems that happened once, and only once!
I’m getting choppy performance again. Is this inconsistent or poor performance during Reality Composer Pro capture happening for others? I figured that generating a non-foveated 4K frame is a lot more GPU work than usual, so I wasn’t super surprised to see performance dip. Are there any other workarounds folks have found?
Do you have any updates on getting this working correctly?
Apple has asked us for this, and it kills the frame rate of the capture. (We are letting Apple know.)
Opened a Unity support ticket for Developer Capture: IN-74980
I assume there has been no progress on this since the last update earlier this year? We still struggle to get good 4K footage in our fully immersive app without additional levers to improve performance during Reality Composer Pro capture. I can appreciate that this might be difficult to deal with on the Unity side; any updates are much appreciated!