Is there a way to compress what the Unity camera sees in real time? I guess it could be achieved by getting a render texture from the camera, exporting it, then re-importing it with the lowest compression settings, but that would heavily drop the FPS.
So instead, is there a way to compress the raw image from the camera directly? I only want the lowest and fastest compression settings, to get those camera artifacts that you see in actual videos.
What about effects other than the datamosh effect? I'm trying to make a horror game about liminal spaces (kinda like the Backrooms), and I want to make it look like a video/image being recorded by a medium-quality camera.
There are a ton of "glitch" or "VHS" post-processing assets on the store. It really depends on what kind of effect you're looking for. It's mostly about faking whatever artifact you want with noise textures or pseudo-random functions. If you want to be really fancy, you can recreate some of the artifacts of video compression by actually doing some elements of that compression in a shader. Things like converting the rendered color to YUV 4:2:0 (or worse) and back to RGB to get color-resolution artifacts. You might even offset the YUV channels to simulate a bad analog signal. Etc.
I did get somewhere by making the UVs mosaic based on the RenderTexture's resolution, so that nearby brightness and color values blend together and look compressed, like the artifacts you'd see in a video.
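The "mosaic UV" trick is just snapping each UV coordinate to a coarse grid before sampling the RenderTexture, so every screen pixel inside a block reads the same texel. In an HLSL shader that's essentially `floor(uv * blocks) / blocks`; this small CPU sketch (hypothetical helper name) shows the same math:

```python
import math

def mosaic_uv(u, v, blocks_x, blocks_y):
    """Snap a UV pair in [0, 1) to the center of its mosaic block.
    blocks_x/blocks_y control how blocky the result looks: fewer
    blocks = bigger mosaic tiles = stronger "compression"."""
    mu = (math.floor(u * blocks_x) + 0.5) / blocks_x
    mv = (math.floor(v * blocks_y) + 0.5) / blocks_y
    return mu, mv
```

For example, with an 8x8 grid, every UV inside the same block maps to that block's center, so the sampled color is constant across the block. Driving the block count from the RenderTexture's resolution (or from luma, for a fake macroblocking effect) is where it starts to look like real compression.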
How do you make the UVs mosaic? Sorry, I'm really new to Unity and just wondering where and how to do that: whether it's the render texture itself, shader code, or something else.