What performance are we looking at, and can a volume encompass the whole world? Or is performance based on size and so forth? In a nutshell, I'm nuts about performance, as I have a strict budget on console that basically only allows a millisecond or two
There is no notion of bounding volumes. The whole world, universe, multiverse is encompassed by default and uses the base parameters (base density, base anisotropy and base color) set on the main component on the camera…
Volumes are only there to inject (add/subtract) density, anisotropy and color locally (except for the "global" volume), according to their shape and some other parameters.
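The injection idea above can be sketched like this (a minimal illustration in Python, with made-up names and numbers — not the asset's actual API): every cell starts from the base value set on the camera component, and each covering volume adds or subtracts its local contribution.

```python
# Illustrative sketch of local injection on top of base parameters.
# All names/values here are assumptions for the example, not real API.

BASE_DENSITY = 0.25  # base density set on the main camera component

def cell_density(base, injections):
    """Accumulate signed density injections from every volume
    covering this cell on top of the base value."""
    density = base
    for injected in injections:    # injected > 0 adds, < 0 subtracts
        density += injected
    return max(density, 0.0)       # density cannot go negative

# One adding volume and one subtracting volume over the same cell:
print(cell_density(BASE_DENSITY, [0.5, -0.125]))  # 0.625
```

The same accumulation would apply per parameter (density, anisotropy, color), each clamped to its valid range.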
Computation is based on a 3D-divided, ranged camera frustum, so performance is (roughly) = volumetric subdivision (read: volumetric accuracy, cell size, …) + (the number of volumes/lights you add × their volumetric coverage). So yes, the bigger a volume is, the more cells it is likely to be taken into account for during cell computing.
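That rough cost statement can be written out as a toy model (purely illustrative — the function and numbers are assumptions, not profiled data): a base term proportional to the frustum subdivision, plus one term per volume/light weighted by how much of the frustum it covers.

```python
# Toy cost model for the rough formula above; not real profiling data.

def relative_cost(subdivision_cells, coverages):
    """subdivision_cells: cell count of the 3D-divided frustum.
    coverages: fraction of the frustum (0..1) each volume/light covers."""
    base = subdivision_cells                          # per-cell base work
    injections = sum(subdivision_cells * c for c in coverages)
    return base + injections

# Doubling a volume's frustum coverage doubles its share of the work:
small = relative_cost(160 * 90 * 64, [0.1])
big = relative_cost(160 * 90 * 64, [0.2])
```

This is why both the subdivision resolution and the size of each volume matter for staying inside a fixed millisecond budget.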
PS4 and Xbox One will be supported, and my goal is less than 2 ms on a PS4.
How does the directional light cookie make sense? One wants something like this as a local effect. Doesn't it make more sense for a spotlight, for example?
I agree (still not 100%, though) that the fan effect "might" make more sense with a spot, but it was a good, contrasty example in my test scene.
Directional cookie makes sense as I could have used a “cloud” cookie for example…
It’s not up to me to decide what people need or what makes sense for them…
My goal is to support the whole Unity lighting pipeline as it is, so people will be able to combine everything however they want, and achieve whatever they want…
I just keep in mind that lights are not only used to mimic real-life lighting but can be used for crazy effects too. And if this can help people achieve even crazier effects by not restraining the possibilities, I'll be happy.
I’m a technical artist, I’ll always keep in mind to keep the artistic possibilities at the max…
Could you elaborate? Why would there be problems?
The effect is not just a post-process. It can of course be one for opaque geometry, saving pixels on the fog application, but the data can also be fetched into a shader and applied per instance/vertex/pixel, taking care of the specificities of the objects in a custom shader if you want. That's how I fog and light the particles, for example.
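The per-pixel application step boils down to standard fog compositing math, sketched here in Python for clarity (this is generic fog blending, not the asset's actual shader code): the fetched fog data gives an in-scattered color and a transmittance, and each pixel is a blend of the two.

```python
# Generic per-pixel fog compositing sketch; not the asset's shader code.

def apply_fog(scene_rgb, fog_rgb, transmittance):
    """Blend one pixel's scene color with the fetched fog color.
    transmittance = 1.0 means no fog, 0.0 means fully fogged."""
    return tuple(s * transmittance + f * (1.0 - transmittance)
                 for s, f in zip(scene_rgb, fog_rgb))

# A white pixel half-covered by grey fog:
print(apply_fog((1.0, 1.0, 1.0), (0.5, 0.5, 0.5), 0.5))
```

In a real shader this same blend would run in the fragment (or vertex) stage of the custom material, which is what lets transparent objects like particles pick up the fog correctly.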
You can use volume injections for that
Just as shown in the first video, you know, the first one, the loooong 14-minute video (I know, I was too enthusiastic)