I'm still working on my project with Unity 2018.2 (I am at the production stage), and I want to move to a more current Unity version.
Unfortunately, after testing, it seems that the forward single-pass mode for VR in the Built-In pipeline does not work correctly with any newer version of Unity. Do you have any leads or information about this?
I'm very concerned about this, because Aura 2 is one of the main graphical features I use.
idk why I can't get the sun rays to look right. The rays themselves look good, but the lit-up areas on the ground don't get the shine-down effect on them.
Edit: Oh, it's the directional parameters.
Why is my lighting fuzzy though?
I am a shader noob and I don't know how to create a standard Unity shader with transparency support that works well with the fog from Aura 2. I know there are the docs and examples, but the example shader does not support normals, shadows, etc.
Hello,
I've upgraded to Unity 2022.3.5 and it seems like Aura doesn't show shadows at all anymore; it's like it ignores walls. Fog doesn't work. The settings from the previous version haven't changed, and shadows are enabled in Project Settings/Quality, too. It doesn't work on PC or on consoles.
I'm getting a lot of white bleed-out whenever the camera moves (upper left of the image). It fades after a second or so, but it looks really bad. Is there any way to avoid this? I tried lowering the Reprojection attribute, but it really only helps at 0, which makes the lights look very low-res and jittery.
See the first page of the thread; the issue is discussed there. I am working on fixing it on my end. It looks like reprojection may be the issue, but check your FPS and other settings as well.
I guess this plugin could support transparent objects out of the box if it were rewritten for the Post Processing Stack. (That would be the best way.)
Currently it renders with the [ImageEffectOpaque] tag, right?
This is a common problem with this type of plugin, because it injects before the transparent pass.
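A minimal sketch of the pattern I mean, assuming a typical image-effect setup (class and material names here are placeholders, not Aura's actual ones):

```csharp
using UnityEngine;

// Placeholder names; this only illustrates where [ImageEffectOpaque] injects.
[RequireComponent(typeof(Camera))]
public class VolumetricComposite : MonoBehaviour
{
    public Material effectMaterial; // assumed full-screen composite material

    // With this attribute, Unity calls OnRenderImage right after the opaque
    // geometry is rendered (before the transparent queue), so transparent
    // objects never receive the effect.
    [ImageEffectOpaque]
    private void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        Graphics.Blit(source, destination, effectMaterial);
    }
}
```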
Actually, my thought is to render it into a command buffer with a different injection point (an "after final" pass? rough sketch below), or to port it into the Post Processing Stack. If you render it inside the stack, it should correctly support any transparency without needing to manipulate the shaders.
Also, rendering inside the stack might bring some performance optimization, because currently it's a CGPROGRAM. I guess rewriting the shader as a pure HLSL program would be faster, and it would support URP as well.
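Something like this for the command-buffer route (a rough sketch only; the material is a stand-in, and which CameraEvent is the right one would need experimenting):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Rough sketch: inject a full-screen composite at a later camera event.
[RequireComponent(typeof(Camera))]
public class LateInjectionSketch : MonoBehaviour
{
    public Material compositeMaterial; // stand-in for the volumetric composite
    private CommandBuffer buffer;

    private void OnEnable()
    {
        buffer = new CommandBuffer { name = "Volumetrics after transparents" };
        int tempId = Shader.PropertyToID("_TempVolumetricsRT");
        buffer.GetTemporaryRT(tempId, -1, -1, 0, FilterMode.Bilinear);
        // Copy the current target, then composite back through the material.
        buffer.Blit(BuiltinRenderTextureType.CameraTarget, tempId);
        buffer.Blit(tempId, BuiltinRenderTextureType.CameraTarget, compositeMaterial);
        buffer.ReleaseTemporaryRT(tempId);
        // AfterForwardAlpha = after the transparent queue in forward rendering;
        // AfterImageEffects would be the "after final"-style injection point.
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterForwardAlpha, buffer);
    }

    private void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.AfterForwardAlpha, buffer);
        buffer.Release();
    }
}
```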
OMG! This is the first time I've ever gotten a notification from here!
So basically yes, moving Aura after the transparent rendering pass is the first thing we would be tempted to do.
But the thing is that, even if it did apply as intended on transparent objects, they would not be fogged correctly and your whole rendering would look very, very weird (the transparent pixels would receive the fog result of the opaque pixels right behind them).
We need the depth of the pixel to apply the volumetrics, and transparent objects have no depth registered (only opaque objects write into the depth buffer).
To make this work we need, as it is now, to apply the volumetrics to opaque objects as a post process, and then, for each transparent object, have a shader apply the volumetrics on top of the opaque result.
Hmm, I have actually experienced the same problems with other post-processing plugins, like Bloom.
Usually the old plugin was written for use without the Post Processing Stack, i.e., as a separate camera script.
The thing is, if the plugin manipulates the screen colors linearly (HDR), and you then try to use it in a project with the Post Processing Stack + color grading enabled (which is common), it will cause bugs, oversaturation, and so on, because the PP Stack converts the screen color to LDR format.
While you don't have many options for injection in the built-in pipeline, the typical workaround is to use the [ImageEffectOpaque] tag on the render image function, which puts the effect before the PP Stack and allows it to correctly utilize linear color space. But then, obviously, you lose transparency support.
So I recommend making a Post Processing Stack version of the script and marking it with the BeforeStack injection point. This will allow it to execute after the transparent pass, but before the stack performs its linear-to-gamma conversion.
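For reference, the PPv2 skeleton I have in mind; the effect and shader names are placeholders, and the relevant part is only the BeforeStack injection point:

```csharp
using System;
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

// Placeholder effect: only the injection point matters here. BeforeStack runs
// after the transparent pass but before the stack's color grading and
// linear-to-gamma conversion.
[Serializable]
[PostProcess(typeof(VolumetricsPPRenderer), PostProcessEvent.BeforeStack, "Custom/Volumetrics")]
public sealed class VolumetricsPP : PostProcessEffectSettings { }

public sealed class VolumetricsPPRenderer : PostProcessEffectRenderer<VolumetricsPP>
{
    public override void Render(PostProcessRenderContext context)
    {
        // "Hidden/Custom/Volumetrics" is an assumed composite shader.
        var sheet = context.propertySheets.Get(Shader.Find("Hidden/Custom/Volumetrics"));
        context.command.BlitFullscreenTriangle(context.source, context.destination, sheet, 0);
    }
}
```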
I really do understand what you're telling me, but that's not the solution.
It is not a matter of using linear or gamma space.
There are several issues preventing us from simply handling transparent objects in one pass:
In the depth buffer (a one-channel buffer), we cannot stack an unknown number of depths with unknown transparencies into one single value.
It is as if I showed you, on paper, a single dot made with a pencil and then asked you how many strokes it took to make that dot and, for every stroke, its strength and opacity. It is not possible.
That is why only opaque objects write into the depth buffer: we know there is nothing in front of them, and we cannot see behind them.
The second issue is that a post process is applied once for the whole screen. However, since we have a different fogging value for every stacked transparent pixel and for the opaque pixel behind them, we need to apply the fog to each of them, in the correct order (back to front) and with the correct mixing algorithm. It cannot be done in one pass.
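To illustrate the ordering problem only (a conceptual sketch; ApplyFog and the layer data are simplified stand-ins for what happens per pixel on the GPU, not Aura's actual code):

```csharp
using UnityEngine;

public static class FogOrderingSketch
{
    public struct TransparentLayer
    {
        public Color color;
        public float depth; // distance of this transparent surface
        public float alpha;
    }

    // Stand-in fog: simple exponential attenuation toward a fog color.
    static Color ApplyFog(Color c, float depth)
    {
        return Color.Lerp(Color.gray, c, Mathf.Exp(-0.1f * depth));
    }

    // Each layer must be fogged at ITS own depth, then alpha-blended back to
    // front over the already-fogged opaque result. A single full-screen pass
    // knows only one depth per pixel, so it cannot reproduce this.
    public static Color Composite(Color opaqueColor, float opaqueDepth,
                                  TransparentLayer[] layersBackToFront)
    {
        Color result = ApplyFog(opaqueColor, opaqueDepth); // the post-process step
        foreach (var layer in layersBackToFront)
        {
            Color fogged = ApplyFog(layer.color, layer.depth); // per-object shader step
            result = Color.Lerp(result, fogged, layer.alpha);  // standard alpha blending
        }
        return result;
    }
}
```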
This is the result of a similar volumetric effect rendering after the transparent pass: a comparison between opaque dithered spheres and transparent spheres, without any manipulation inside the shaders. That's what I'd like to have with Aura. Maybe it's not 100% accurate in color, but I think it's far better than rendering it behind the transparent objects.
The reason I'm asking is that I cannot edit all the shaders in my project, especially in the scenes that use transparent Standard shaders.
Thanks for the chat; I understand your situation better now, and why this solution would be more interesting for you.
Opaque dithering and all other alpha-test shaders should normally work out of the box without modification: since their pixels are either fully opaque or fully transparent, they write into the depth buffer. You may also find a starter Standard shader that you can modify to fit your needs and use in your project.
I'd suggest keeping the modification local (just removing the ImageEffectOpaque attribute should already do it).
But unfortunately, I hope you can understand, I cannot push an update that breaks thousands of projects that do it properly.
I'm trying to get Aura 2 working for PCVR with the built-in renderer and single-pass VR.
It was originally rendering pink, with a shader error that I fixed by changing the camera depth buffer declaration in PostProcessShader.shader, as shown below: