How much does Unity actually expose?

Hey there, forum folks!

Because I don’t think many of the more experienced users / devs visit Unity Answers frequently, I’m asking here…
How much of its insides does Unity expose? To what extent can you alter the rendering pipeline? Because it seems to me that the number of possibilities is awfully small. For example, I wanted to implement this technique: http://www.vrvis.at/publications/pdfs/PB-VRVis-2009-022.pdf
tl;dr: It’s a technique for rendering accurate soft shadows at very little cost. Instead of rendering multiple shadow maps with different offsets every frame, it renders them over time, accumulating the result in a buffer, which is then used for the lighting pass. This results in high-quality shadows for dynamic lights that don’t move rapidly, such as the sun crossing the sky.
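
Just to make the idea concrete, this is roughly the driver I would want to write. It’s a minimal sketch under my own assumptions: the sample camera, the blend material and its _BlendWeight property are all hypothetical, and, as far as I can tell, Unity offers no hook to do any of this to its actual shadow maps.

```csharp
using UnityEngine;

// Rough sketch of the paper's idea (my own hypothetical setup, not a Unity feature):
// jitter the light a little every frame, render one lighting/shadow sample into an
// off-screen buffer, and keep a running average of all samples so far.
public class AccumulatedSoftShadows : MonoBehaviour
{
    public Light sun;               // the slowly moving light we sample over time
    public Camera sampleCamera;     // renders one sample per frame (hypothetical setup)
    public Material blendMaterial;  // alpha-blends each new sample into the accumulation buffer

    RenderTexture accumulation;     // running average of all samples so far
    RenderTexture currentSample;    // this frame's sample
    Quaternion baseRotation;
    int sampleCount;

    void Start()
    {
        baseRotation  = sun.transform.rotation;
        accumulation  = new RenderTexture(1024, 1024, 0,  RenderTextureFormat.ARGBHalf);
        currentSample = new RenderTexture(1024, 1024, 24, RenderTextureFormat.ARGBHalf);
    }

    void Update()
    {
        // Offset the light direction within the light source's area for this sample.
        sun.transform.rotation = baseRotation * Quaternion.Euler(Random.insideUnitSphere * 0.25f);

        // Render this frame's sample off-screen.
        sampleCamera.targetTexture = currentSample;
        sampleCamera.Render();

        // Running average: accumulation = lerp(accumulation, sample, 1/n). The blend
        // material's shader outputs the sample with alpha = _BlendWeight and relies on
        // ordinary alpha blending against what is already in the accumulation buffer.
        sampleCount++;
        blendMaterial.SetFloat("_BlendWeight", 1.0f / sampleCount);
        Graphics.Blit(currentSample, accumulation, blendMaterial);
    }
}
```

The averaging itself is trivial; the part I can’t get at is rendering the per-sample shadow map in the first place.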

But well… it seems that it’s not possible. As far as I know, and please correct me if I’m wrong, there are only two ways you can fiddle with the rendering: shaders and off-screen buffers. You can’t change how lighting is done, you can’t implement a better shadow-rendering method, you can’t access the G-buffer, you can’t fiddle with the stencil buffer, and you’re limited in the AA methods you can use… basically to purely post-processing ones and the good old MSAA, which the GPU happily handles for you. The shaders are limited as well… you can’t, for example, write to the depth and stencil buffers, nor output multiple color values. Every time I see a nice algorithm, the first thing I ask is whether it can be done in Unity… and most of the time the answer is, sadly, no.
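
For reference, the off-screen-buffer side of things is basically just the image-effect hook. A minimal sketch; the effect material and the shader behind it are placeholders of my own, not anything Unity ships:

```csharp
using UnityEngine;

// The one hook Unity gives you here: it hands you the finished frame and a destination,
// and all you get to do is a full-screen blit through your own shader.
[RequireComponent(typeof(Camera))]
public class MyPostEffect : MonoBehaviour
{
    public Material effectMaterial; // material using a custom post-process shader (placeholder)

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        Graphics.Blit(source, destination, effectMaterial);
    }
}
```

That’s great for color grading and the like, but by the time OnRenderImage runs, lighting and shadows are already baked into the frame.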

Sorry if this sounds like ranting… Perhaps I expected something else from this engine. It’s awesome in many ways… creating a scene is blazingly fast and the scripting is very intuitive so far. It shines if you’re satisfied with good results, but once you want just a bit more control, so you can optimize for your use case or introduce a technique that isn’t already included with every Unity installation, you hit a wall. Anyway, my question is: is there really no way to squeeze just a bit more out of it? I’m aware of the overridable internal shaders, which I’m really glad for… are there any other “gotchas”? If not, do you plan to broaden the amount of control available in the future? And do you have any idea how to implement the paper I mentioned?

Thanks! And I apologize once more… There’s no doubt you put a ton of work into this.

Oh?

–Eric

Alright… I should have elaborated on that one… The shadow rendering can indeed be easily modified in all your everyday shaders, because the shader backend Unity uses is open source (the cgincludes)… I believe the only reason for that is that it HAS to be this way, or else the surface shaders couldn’t compile. You can even set the shadow read and write passes, or override the internal shaders for combining and sampling the shadows, so I believe you could blur them even more if you wanted to, but you have no way to affect the generation of the shadow map itself.
What that demo looks like to me is just Unity’s old jiggidy-jaggidy shadows, with volumetric scattering and, as a cherry on top, extra attenuation of the bright areas to simulate overcast shadows. I refuse to call those few pixels a proper penumbra.

I tried hard to find a method that would work with Unity and look or perform better than this one, but couldn’t find much, so let’s stick to the ones that don’t work with it. Skipping all those that don’t use shadow maps: