I’ve heard that post-processing effects should be avoided in VR. Can anyone explain why?
The Oculus Rift best practices guide (https://developer.oculus.com/documentation/intro-vr/latest/concepts/bp_intro/) says only this:
- The images presented to each eye should differ only in terms of viewpoint; post-processing effects (e.g., light distortion, bloom) must be applied to both eyes consistently as well as rendered in z-depth correctly to create a properly fused image.
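As I read it, that rule just means only the camera viewpoint should change per eye, while the effect parameters stay identical. Here’s a minimal sketch of the idea (everything in it, `EyeBuffer`, `applyBloom`, and so on, is a hypothetical stand-in, not a real engine API):

```cpp
#include <array>

// Hypothetical stand-ins; a real engine would supply these.
struct BloomParams { float threshold = 1.0f; float intensity = 0.35f; };
struct EyeBuffer   { int eyeIndex = 0; /* color/depth targets elided */ };

void renderScene(EyeBuffer& eye) { /* draw from this eye's viewpoint */ }

void applyBloom(EyeBuffer& eye, const BloomParams& p)
{
    // Threshold + blur + composite; the math is identical for either eye.
    (void)eye; (void)p;
}

int main()
{
    std::array<EyeBuffer, 2> eyes{ EyeBuffer{0}, EyeBuffer{1} };
    const BloomParams bloom;       // one parameter set for the whole frame

    for (auto& eye : eyes)
    {
        renderScene(eye);          // the viewpoint differs per eye...
        applyBloom(eye, bloom);    // ...the post-processing does not
    }
}
```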
“Post-processing effects” covers far too many techniques to be something you avoid across the board in VR. I mean, that would even include things like some types of anti-aliasing and color grading, which I assume are generally fine in VR.
Stuff like depth of field, bloom, lens flares, dirty-lens effects, chromatic aberration, and vignetting could be really problematic in VR, depending on how you handle them.
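To make the “depending on how you handle them” part concrete, here’s the kind of per-pixel math a vignette does, written out on the CPU just to show the shape of it (a hypothetical sketch, not anyone’s actual shader). Because the coordinates are per-eye image space rather than world space, a strong vignette reads as darkening glued to each lens, which is exactly where these effects can go wrong in VR; keeping the strength low and identical for both eyes is the usual mitigation.

```cpp
#include <cstdio>

// Hypothetical per-pixel vignette math, written on the CPU for clarity.
// u, v are normalized (0..1) coordinates in one eye's image.
float vignette(float u, float v, float strength = 0.15f)
{
    float dx = u - 0.5f, dy = v - 0.5f;   // offset from the eye-image center
    float d2 = dx * dx + dy * dy;         // squared radial distance
    return 1.0f - strength * d2;          // gentle quadratic darkening,
}                                         // multiplied into the final color

int main()
{
    std::printf("center=%.3f corner=%.3f\n",
                vignette(0.5f, 0.5f), vignette(0.0f, 0.0f));
}
```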
Are there specific effects you are considering using in VR?
The original post wasn’t about a specific project or idea. I’d just been hearing lots of advice along the lines of “don’t even try post-processing in VR” and was skeptical. Thanks for the clarification!
Edge detection works well, and I’ve experimented with combining chromatic aberration and blur over a split second when you take damage in an FPS-style game; it works nicely if you keep these effects subtle. No harm in trying anything out for the right reasons. Motion blur, however, is one I haven’t found a good use for yet.
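For what it’s worth, the “split second” part is most of the trick. Something like this ease-out envelope is roughly what I mean (hypothetical numbers; `damagePulse` isn’t from any particular engine): the strength spikes on the hit and decays to zero in about a quarter of a second, so the aberration and blur never linger.

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical damage-pulse envelope: 1.0 at the moment of the hit,
// easing out quadratically to 0.0 over `duration` seconds. The result
// would drive the chromatic aberration offset and the blur radius.
float damagePulse(float timeSinceHit, float duration = 0.25f)
{
    float t = std::clamp(timeSinceHit / duration, 0.0f, 1.0f);
    return (1.0f - t) * (1.0f - t);
}

int main()
{
    // Sample the envelope; strength falls from 1.0 to 0.0 within 0.25 s.
    for (int i = 0; i <= 6; ++i)
    {
        float t = 0.05f * static_cast<float>(i);
        std::printf("t=%.2fs strength=%.2f\n", t, damagePulse(t));
    }
}
```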