Implementing Camera Shake Effect in visionOS Mixed Reality Applications

Hello,

I am a developer working on mixed reality applications, currently building with Unity’s PolySpatial for app development. In past projects, to create a more realistic sense of impact when a user hits an object, I often shook the main camera, which proved very effective in enhancing the feeling of impact.

However, I’m now facing a challenge in replicating this common effect in the development of mixed reality applications for visionOS. My question is: does Apple provide an interface that allows applications to access and manipulate the real-world background imagery, specifically to create a camera shake effect?

I am eager to understand if there’s a way to implement this in visionOS, as it would significantly enhance the user experience in my application.

I sincerely look forward to any guidance or information you can provide.

Best regards

Interesting question! I’m not aware of any way to do this via the RealityKit API (which is where I would expect to find it), and I suspect that Apple might not want to expose a way to do this because of the potential for abuse/motion sickness.

However, it might be possible to create an effect like this (in unbounded MR mode) by temporarily surrounding the viewer with a sphere (inverted, so the faces are visible from the inside) that resamples the environment. We will be adding support for a Shader Graph node, “PolySpatial Environment Radiance”, that allows access to Apple’s image-based lighting, but you can also get the effect by making a mirror-like material (Metallic: 1, Smoothness: 1) and manipulating the fragment normal: a normal that’s orthogonal to the view direction will cause the lighting calculations to sample the environment map in the opposite of the view direction, creating a transparency effect.
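To make that a bit more concrete, here is a rough, untested sketch of the Unity side of such an effect. The material setup itself has to be done in Shader Graph as described above, so the `environmentMaterial` field, the component name, and all of the numbers here are just placeholders; it also assumes an unbounded volume camera with `Camera.main` tracking the viewer.

```csharp
using System.Collections;
using UnityEngine;

// Rough sketch only: surrounds the viewer with an inverted sphere that uses the
// mirror-like "environment resampling" material described above, then wobbles
// the sphere briefly so the resampled background appears to shake.
// Assumptions (not an official API): unbounded MR mode, Camera.main is driven
// by the device pose, and environmentMaterial is the Shader Graph material
// (Metallic 1, Smoothness 1, fragment normal bent as described).
public class EnvironmentShake : MonoBehaviour
{
    [SerializeField] Material environmentMaterial;
    [SerializeField] float duration = 0.3f;          // how long the shake lasts
    [SerializeField] float magnitudeDegrees = 2f;    // peak rotational wobble

    GameObject shakeSphere;

    public void TriggerShake()
    {
        StopAllCoroutines();
        StartCoroutine(ShakeRoutine());
    }

    IEnumerator ShakeRoutine()
    {
        if (shakeSphere == null)
            shakeSphere = CreateInvertedSphere();

        shakeSphere.SetActive(true);

        float elapsed = 0f;
        while (elapsed < duration)
        {
            // Keep the sphere centred on the viewer.
            if (Camera.main != null)
                shakeSphere.transform.position = Camera.main.transform.position;

            // Perlin noise gives a smoother wobble than purely random offsets.
            float t = Time.time * 25f;
            float yaw   = (Mathf.PerlinNoise(t, 0f) - 0.5f) * 2f * magnitudeDegrees;
            float pitch = (Mathf.PerlinNoise(0f, t) - 0.5f) * 2f * magnitudeDegrees;
            shakeSphere.transform.rotation = Quaternion.Euler(pitch, yaw, 0f);

            elapsed += Time.deltaTime;
            yield return null;
        }

        shakeSphere.SetActive(false);
    }

    GameObject CreateInvertedSphere()
    {
        var go = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        go.name = "EnvironmentShakeSphere";
        Destroy(go.GetComponent<Collider>());
        go.transform.localScale = Vector3.one * 10f; // large enough to surround the viewer

        // Flip the triangle winding and normals so the faces are visible from inside.
        var mesh = Instantiate(go.GetComponent<MeshFilter>().sharedMesh);
        var triangles = mesh.triangles;
        for (int i = 0; i < triangles.Length; i += 3)
            (triangles[i], triangles[i + 1]) = (triangles[i + 1], triangles[i]);
        mesh.triangles = triangles;

        var normals = mesh.normals;
        for (int i = 0; i < normals.Length; i++)
            normals[i] = -normals[i];
        mesh.normals = normals;

        go.GetComponent<MeshFilter>().sharedMesh = mesh;
        go.GetComponent<MeshRenderer>().sharedMaterial = environmentMaterial;
        return go;
    }
}
```

You would call `TriggerShake()` from whatever handles the hit, and probably want to ramp the magnitude down over the duration rather than cutting it off abruptly.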

Thank you for your reply. As you mentioned, using unbounded mode can achieve the expected effect, which will indeed be helpful. In fact, my development is mainly based on the PolySpatial sample scenes, and in almost every one of them the volume camera is set to bounded mode. So I have a question: what are the respective advantages and typical use cases of bounded and unbounded modes when developing mixed reality apps? Moreover, I’ve noticed that in unbounded mode the entire app cannot be dragged.

That’s kind of a broad question, but the basic difference is that unbounded applications are meant to be exclusive, like full-screen apps: they cover the entire space when active, which is why you can’t drag them around. They also have access to additional information from visionOS, such as the viewer (camera) position. Bounded apps are meant to run simultaneously, like windows on a desktop. For the development process, I have found bounded apps to be somewhat more convenient just because they always open in front of you (versus unbounded apps, where you might have to look around to find the content). However, which mode you’re likely to use is very application-dependent.
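As a small illustration of that viewer-position difference (hypothetical sketch, assuming `Camera.main` follows the device pose in unbounded mode, which it won’t in bounded mode):

```csharp
using UnityEngine;

// Illustration only: keeps a piece of content in front of the viewer, which is
// only possible in unbounded mode because that's where the viewer (camera)
// position is available. In bounded mode the volume opens in front of the user
// and can be repositioned by them, so content just stays at its authored
// position inside the volume.
public class FollowViewer : MonoBehaviour
{
    [SerializeField] bool unboundedMode = true; // assumed to match the volume camera setup
    [SerializeField] float distance = 1.5f;     // metres in front of the viewer

    void LateUpdate()
    {
        if (!unboundedMode || Camera.main == null)
            return;

        var cam = Camera.main.transform;
        transform.position = cam.position + cam.forward * distance;

        // Billboard toward the viewer (flip the direction if your content faces -Z).
        transform.rotation = Quaternion.LookRotation(transform.position - cam.position);
    }
}
```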