I want to reproduce this blurry background that Seasons after Fall has:
I used the Depth of Field module from Unity’s post-processing stack, but it didn’t affect my sprites based on their z position: everything was either fully blurred or fully sharp.
What can I do to achieve a similar effect?
Is this the only way? Something based on the Z position would be much faster for my artist.
I recently saw someone creating a shader that blurs anything behind it, maybe that’s one way?
Set the background art to use cutout instead and DoF will work. The reason it doesn’t work right now is that transparent stuff doesn’t write to the depth buffer by default, so Depth of Field has no depth to read. You will lose partial transparency, though.
Cutout (alpha test) does write depth, which is why it works.
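If you want to try it, here’s a minimal sketch of the swap (assumptions: “Transparent/Cutout/Diffuse” is Unity’s built-in legacy cutout shader, and the component/field names are just placeholders; any alpha-test shader that writes depth works the same way):

```csharp
using UnityEngine;

// Swap a sprite over to an alpha-test (cutout) material so it writes depth
// and the Depth of Field effect can pick it up. Note this particular shader
// is lit, so the sprite will need a light on it to show up properly.
[RequireComponent(typeof(SpriteRenderer))]
public class CutoutSpriteMaterial : MonoBehaviour
{
    [Range(0f, 1f)] public float alphaCutoff = 0.5f;

    void Start()
    {
        var cutout = Shader.Find("Transparent/Cutout/Diffuse");
        if (cutout == null) return;

        var mat = new Material(cutout);
        mat.SetFloat("_Cutoff", alphaCutoff); // pixels below this alpha are discarded
        GetComponent<SpriteRenderer>().material = mat;
    }
}
```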
Personally? I’d go with blurring source art for that particular game. Seems like the easiest way?
Losing partial transparency is a no-no in my game’s case. Thanks for pointing that out, though.
I don’t really understand why that would be so costly for players’ GPUs, since 3D games have blurred objects in the distance all the time. Is it because it’s done on the camera rather than on the game objects?
And in these 3D games, what do they do when they have transparency on certain props?
How about: pre-blur your textures in a script as a custom build step? Maybe generate a few blur sizes per sprite, if it’s going to be dynamic in your game.
Then just pick the right blurred texture based on z distance, during rendering.
Then it’s easy for your artist, easy to tweak blur strengths via script, and fast on whatever GPU you are targeting.
Only downside is memory cost if you want to store many blur strengths per sprite.
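For the “pick based on z” part, something like this is what I have in mind (just a sketch; blurLevels, maxBlurDistance and so on are placeholder names, and you’d generate the blurred sprites in your build step):

```csharp
using UnityEngine;

// Picks one of several pre-blurred sprite variants based on the object's
// z distance from the camera. Element 0 is the sharp original, the last
// element is the most blurred version.
[RequireComponent(typeof(SpriteRenderer))]
public class DepthBlurSwapper : MonoBehaviour
{
    public Sprite[] blurLevels;          // sharp -> most blurred
    public float maxBlurDistance = 20f;  // z distance at which max blur is reached

    SpriteRenderer sr;

    void Awake() { sr = GetComponent<SpriteRenderer>(); }

    void LateUpdate()
    {
        if (blurLevels == null || blurLevels.Length == 0 || Camera.main == null)
            return;

        // Normalize the z distance and map it to an index into blurLevels.
        float z = transform.position.z - Camera.main.transform.position.z;
        float t = Mathf.Clamp01(z / maxBlurDistance);
        sr.sprite = blurLevels[Mathf.RoundToInt(t * (blurLevels.Length - 1))];
    }
}
```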
So a quick explanation for why we’re all basically saying “don’t use DoF image effects”. It certainly seems like it would be the easiest solution, and indeed many games do this.
The problem is that depth of field effects work by rendering the scene “in focus”, then using the depth of each pixel (as stored in the camera depth texture) to blur that “in focus” image. This works on opaque, hard-edged geometry, but not on anything with alpha.

The simple reason is that the depth texture can only hold one depth value per pixel, but multiple overlapping objects with partial transparency mean each pixel may actually have multiple depth values. You could use an approximation of the depth, but you’ll always be dealing with unexpected hard edges. Real-time depth of field on semi-transparent objects is still kind of an unsolved problem.

The easiest solution is to blur everything beforehand. Some people have implemented solutions where they blur each sprite in their shader in real time, but this is much slower than the depth of field image effect, as you’ll be doing the blur at multiple depths per pixel, and often for pixels that won’t ever be seen.
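If it helps to see that spelled out, here is a toy CPU version of the per-pixel logic (purely illustrative; real DoF runs in a shader, and every name here is made up):

```csharp
using UnityEngine;

// A caricature of a depth-of-field image effect: blur the "in focus" image
// using the single depth value stored per pixel. Two overlapping
// semi-transparent sprites at different depths still share one depth[x, y]
// entry, so at least one of them ends up blurred by the wrong amount.
public static class ToyDepthOfField
{
    static int BlurRadius(float depth, float focusDepth, float maxRadius)
    {
        // Radius grows with distance from the focus plane.
        return Mathf.RoundToInt(Mathf.Abs(depth - focusDepth) * maxRadius);
    }

    public static Color[,] Apply(Color[,] color, float[,] depth,
                                 float focusDepth, float maxRadius)
    {
        int w = color.GetLength(0), h = color.GetLength(1);
        var result = new Color[w, h];
        for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
        {
            int r = BlurRadius(depth[x, y], focusDepth, maxRadius);

            // Simple box blur of radius r around (x, y).
            Color sum = Color.clear;
            int count = 0;
            for (int dx = -r; dx <= r; dx++)
            for (int dy = -r; dy <= r; dy++)
            {
                sum += color[Mathf.Clamp(x + dx, 0, w - 1),
                             Mathf.Clamp(y + dy, 0, h - 1)];
                count++;
            }
            result[x, y] = sum / count;
        }
        return result;
    }
}
```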
Is it complicated to generate a new texture from inside Unity, using Unity as if it were Photoshop? I mean based on an existing sprite (made in PS) + a shader (made in Unity).
This might alleviate lots of optimization issues, since quite often I won’t need the shader in real time. It could be useful for simple things like blending the color of a sprite with another texture or a flat color, things of that sort (overlay, hard light…).
I know that stuff could be done in PS obviously, but there are some advantages to seeing the effects in the scene.
It is not complicated to make yourself a tool to blur a bunch of images. But it is tremendously more complicated than batch-running an action in Photoshop (which is basically an ironclad version of the thing you’d be re-making).
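That said, the Unity end of such a tool is not much code. A minimal editor-side sketch, assuming you run it from the editor and handle output paths and import settings yourself (TextureBaker and its parameters are illustrative, not a built-in API):

```csharp
using System.IO;
using UnityEngine;
using UnityEditor; // editor-only: put this script in an Editor folder

// Run a material's shader over a source texture once and save the result
// as a PNG, i.e. "Unity as Photoshop" for baking effects offline.
public static class TextureBaker
{
    public static void Bake(Texture2D source, Material effect, string outputPath)
    {
        // Render the source through the effect material into a temp target.
        var rt = RenderTexture.GetTemporary(source.width, source.height, 0);
        Graphics.Blit(source, rt, effect);

        // Read the result back into a Texture2D.
        var prev = RenderTexture.active;
        RenderTexture.active = rt;
        var baked = new Texture2D(source.width, source.height, TextureFormat.RGBA32, false);
        baked.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
        baked.Apply();
        RenderTexture.active = prev;
        RenderTexture.ReleaseTemporary(rt);

        // Write it out, e.g. outputPath = "Assets/Baked/mySprite_blurred.png".
        File.WriteAllBytes(outputPath, baked.EncodeToPNG());
        AssetDatabase.Refresh(); // make the new PNG show up as an asset
    }
}
```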
Been struggling to find a way to do this for a while now, and luckily stumbled upon your article @TheSixthHammer.
Would it be possible to provide a few more examples of how you did the ratio calculations for figuring out the best way to scale down sprites before blurring them?
Testing this out now, but I do have a variety of sizes on my sprites and would love a bit more insight instead of experimenting for ages.
What I am most interested in is how you dealt with the scale-down vs blur ratio when upscaling the original sprite. From my initial testing I got good results from (as an example) downscaling a 1024px sprite to 256px with a 2px blur, and testing the upscaled version of that against the 1024px version with an 8px blur. Looks good.
Where I fall off is when moving the sprites further back in z-space and needing to scale them up based on their position. Did you use a variety of sprite sizes with the same blur radii to get the corresponding blur at each upscaled size? As in (with the above example) one 256px version with a 2px blur to get an 8px blur when upscaled 4x (to 1024px), and then a 512px version with a 2px blur upscaled 4x (in that case it should fill 2048px of screen space)?
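For reference, the rule of thumb I’ve been assuming in my own tests is that the baked blur scales linearly with the upscale factor, i.e. effectiveBlur = bakedBlur * (onScreenSize / bakedSize). In code form (my own helper, not from the article):

```csharp
using UnityEngine;

// Rule-of-thumb math for pre-blurred sprite variants: a blur baked into a
// small texture scales linearly when the texture is upscaled on screen.
public static class BlurRatio
{
    // Blur (in screen pixels) the player effectively sees.
    public static float EffectiveBlur(float bakedBlurPx, float bakedSizePx,
                                      float onScreenSizePx)
    {
        return bakedBlurPx * (onScreenSizePx / bakedSizePx);
    }

    // Baked texture size needed to hit a target on-screen blur
    // with a fixed baked blur radius.
    public static float RequiredBakedSize(float bakedBlurPx, float targetBlurPx,
                                          float onScreenSizePx)
    {
        return onScreenSizePx * bakedBlurPx / targetBlurPx;
    }
}

// With the numbers above:
//   EffectiveBlur(2, 256, 1024) -> 8  (256px sprite + 2px blur shown at 1024px)
//   EffectiveBlur(2, 512, 2048) -> 8  (512px sprite + 2px blur shown at 2048px)
```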