I’ve been working on a visual training project for some time now, and one of the biggest problems we’ve been dealing with is the visual clarity of objects in the scene. The overall quality of what can be seen is quite poor, but the worst cases are when viewing objects near the outer edges/periphery of the view, where the image becomes severely distorted and anything but clear. Text in particular becomes very difficult to read and exhibits odd effects such as shimmering when it is not close to the camera.
On the topic of viewing objects at oblique angles towards the edge of the screen, I can see that this is at least partly down to the nature of the Quest 2 headset, because I see much of the same distortion when looking at things in the corner of the screen in the headset’s main menu, for example. However, I suspect there is something I am missing in terms of settings/config within Unity, because the clarity is so poor that we simply could not release anything at this stage, and I know that other projects achieve a much higher visual quality standard.
If nothing else, the single biggest priority is that the “target” (typically a tennis ball with an X on it) is as clear as possible to the user. Currently, for example, when I view the target at the corner of the screen, it is blurry and distorted, which ruins the experience. I have tried to configure as many project settings as possible to conform to Meta’s recommendations for Unity, but I am wondering whether it’s worth exploring texture-specific settings such as aniso level, filtering, etc.
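To illustrate the kind of texture-level settings I mean (nothing from our project yet, just the generic Unity knobs I’m considering; the values are guesses):

```csharp
using UnityEngine;

public class TargetTextureTweaks : MonoBehaviour
{
    // Placeholder for whatever texture the tennis-ball material uses.
    [SerializeField] private Texture2D targetTexture;

    private void Awake()
    {
        // Force anisotropic filtering on project-wide, overriding per-texture settings.
        QualitySettings.anisotropicFiltering = AnisotropicFiltering.ForceEnable;

        if (targetTexture != null)
        {
            targetTexture.anisoLevel = 16;                   // maximum anisotropy
            targetTexture.filterMode = FilterMode.Trilinear; // smoother mip transitions
        }
    }
}
```

Would settings along these lines plausibly help with the shimmering, or is that a dead end at the edges of the lens?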
I am aware that I have not provided much project-specific information at this point (so as not to congest this post further), but I would be more than happy to do so if someone wants to explore a particular topic/subset of settings. While I have a decent amount of experience with Unity in general, I am quite lost when it comes to rendering issues, especially in a 3D VR context. Help/guidance of any form will be greatly appreciated, thank you!
Do “commercial” games or scenes appear much “clearer” than your efforts?
Maybe it’s a common complaint due to VR still being in its infancy where resolution is concerned?
I’m guessing that a more advanced headset is not available to compare?
Can you give a screenshot or a photo of your efforts? Maybe some more details of the “scene”, your settings, etc.?
All the best!
The nature of Fresnel lenses will always mean reduced clarity the further you move from the “sweet spot” in the center of the lens. If you try a Quest Pro, you’ll find it already has greatly increased clarity across the whole field of view.
While you can’t get around the small eye box on the Quest 2, there are several design choices you can make to improve things. Firstly, you can place an indicator closer to the center of the screen to guide the user to where they need to be looking. Secondly, if possible, increase the render resolution. To display something in VR, the rendered image is warped to counteract the distortion of the lens. If you render at exactly the resolution of the displays, which is the default, the warped image effectively ends up with a lower resolution. If you have enough performance headroom, or free some up by using Application SpaceWarp, you can increase the render resolution for improved clarity.
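If it helps, the render scale can also be bumped from a script. A minimal sketch using Unity’s generic XR API (the 1.3 value is only a guess at a starting point; profile on device):

```csharp
using UnityEngine;
using UnityEngine.XR;

public class RenderScaleBooster : MonoBehaviour
{
    // Multiplier on the eye texture resolution; values above 1.0 supersample.
    [SerializeField] private float resolutionScale = 1.3f;

    private void Start()
    {
        // Allocates larger eye textures than the panel's native resolution,
        // so the lens-distortion warp has more pixels to sample from.
        XRSettings.eyeTextureResolutionScale = resolutionScale;
    }
}
```

If you’re on URP, I believe the Render Scale slider on the pipeline asset does much the same thing without any code.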
Related to that, you should also use compositor layers for text and other two-dimensional graphics you want to show with maximum clarity. Compositor layers are an OS feature that lets the compositor sample the layer’s content directly and “ray trace” it through the lens distortion, rather than resampling it from the eye buffer, which makes an enormous leap in clarity.
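For reference, in the Oculus Integration this is exposed through the OVROverlay component. A rough sketch (the exact field names here are from memory, so treat them as assumptions and check them against the package):

```csharp
using UnityEngine;

public class TextOverlaySetup : MonoBehaviour
{
    // Pre-rendered texture with the text/UI you want at full clarity.
    [SerializeField] private Texture uiTexture;

    private void Start()
    {
        // OVROverlay ships with the Oculus Integration; the field names below
        // are assumptions based on its inspector, so verify against the source.
        var overlay = gameObject.AddComponent<OVROverlay>();
        overlay.currentOverlayType = OVROverlay.OverlayType.Overlay; // composited on top of the eye buffer
        overlay.currentOverlayShape = OVROverlay.OverlayShape.Quad;  // flat panel layer
        overlay.textures = new Texture[] { uiTexture };
        overlay.isDynamic = false; // content doesn't change every frame
    }
}
```

In practice most people just add the component in the inspector and assign the texture there.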
As the poster after you shared, this is almost certainly a hardware/lens limitation that I was somehow not aware of previously. The photo below is just a quick demo to illustrate the problem. With the headset on and keeping the head still, only the first 3 balls in each direction are visually clear. It seems like there’s not much that can be done here, but I do want to make sure that FFR (fixed foveated rendering) is off in this case. While I’m pretty sure it’s disabled by default by Unity/OpenXR, how can I verify that it’s actually off?
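For anyone who can confirm: the closest thing I’ve found is the Utils class in the Oculus XR Plugin (com.unity.xr.oculus), though I’m not sure whether it applies when OpenXR is the active loader, so treat this as a guess:

```csharp
using UnityEngine;
using Unity.XR.Oculus; // Oculus XR Plugin (com.unity.xr.oculus)

public class FoveationCheck : MonoBehaviour
{
    private void Start()
    {
        // 0 should mean foveated rendering is off; higher levels trade edge
        // quality for GPU time. I'm assuming these Utils calls are the right
        // ones - please correct me if they only work with the Oculus loader.
        Debug.Log($"Foveation level: {Utils.GetFoveationLevel()}");
        Utils.SetFoveationLevel(0);
    }
}
```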
Unfortunately, one of the core use cases for the project is that the user’s head and eyes are not looking in the same direction (saccades). I will definitely try increasing the resolution though (I assume this should be doable with OpenXR, as I saw a method for it in the docs; hopefully the device respects it). Compositor layers look a bit daunting (and would require switching to the Oculus Integration, which is fine)… Is there any way they can work for the 3D objects shown in the screenshot in my reply above, or are they strictly for 2D? Cheers.
It’s strictly 2D. Since eye movement is central to your project, please consider purchasing a Quest Pro instead. It has greatly improved edge-to-edge clarity, and eye tracking to boot.