Is the following possible in Unity? I’m playing around with trying to make that silly shadow mapper…
This is what I think needs to happen in order to create a shadow map:
- For an object casting a shadow, I need to render to texture from the light's position, and instead of rendering what the camera sees, encode the depth of the object (shadow caster) into the texture. The depth has to be encoded because we don't have floating-point textures.
- Then, from the viewpoint of the regular game camera, I need to render the object from step 1 and project the shadow map from step 1 onto it from the light's position. Then I compare depths to determine whether each pixel is in shadow. This means I need access to the depth texture from step 1.
So how do I go about doing this in Unity? In RenderMonkey, for example, you create a depth pass that renders the depth to a texture for an object. Then you create another pass for the same object, but this time you render with the view camera. In this second pass you access the renderTexture from the first pass to see if the object is in shadow.
I'm having trouble doing the same thing logistically in Unity. In Unity I can assign a material to an object, and the object can have a shader associated with it. I can then create multiple passes in that shader. But how do I render to a texture in the first pass and then render to the regular viewport in the second pass? I don't think multiple passes like this are the same as RenderMonkey's, are they?
So instead of doing that, I set up a render-to-texture camera, parented to the light, that writes the depth map into a RenderTexture (called ShadowMap). It renders first, before the main camera. It is linked via JavaScript to a material, which I can then assign to an object that I want to cast shadows. Using JavaScript I pass the texture matrix I calculated for the camera on to the material, and from there down to the shader.
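To be concrete, the matrix hand-off I'm describing looks roughly like this (rewritten in C# for clarity; the ShadowMatrixSetup class and the "_ShadowMatrix" property name are just placeholders for my own names):

using UnityEngine;

// Sketch: pushes the shadow camera's view-projection matrix
// (with a clip-to-texture bias) to the shadow-receiving material.
public class ShadowMatrixSetup : MonoBehaviour
{
    public Camera shadowCamera;        // the render-to-texture camera on the light
    public Material shadowedMaterial;  // material on the shadow-receiving object

    void LateUpdate ()
    {
        // Remap clip space (-1..1) to texture space (0..1).
        Matrix4x4 bias = Matrix4x4.TRS (
            new Vector3 (0.5f, 0.5f, 0.5f),
            Quaternion.identity,
            new Vector3 (0.5f, 0.5f, 0.5f));
        Matrix4x4 shadowMatrix =
            bias * shadowCamera.projectionMatrix * shadowCamera.worldToCameraMatrix;
        // "_ShadowMatrix" is a made-up property name; match it in the shader.
        shadowedMaterial.SetMatrix ("_ShadowMatrix", shadowMatrix);
    }
}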
OK, so let's assume that object now has a ShadowMap it can access… now how do I actually render the object from the regular view? If I use a second pass on the material that I just assigned to my shadow-casting object, it's still rendering from the render-to-texture camera. So I can't do this in the second pass of that shader. So what do I do? Create a second material, assign it to the object, and have that render from the regular view camera? Will this work? Or am I missing something really obvious (i.e. missing the forest for the trees)?
Any help would be appreciated.
Cheers,
Paul Mikulecky
Lost Pencil Animation Studios Inc.
http://www.lostpencil.com
You’re on the right track. Swap the shaders from the shadowCamera (remembering to swap them back in OnPostRender).
For the actual object, just assign your custom-made shadowed shader. The rendering process will look like this:
1: The shadowing camera:
1.1 You have a custom script on the shadowing camera that swaps the shader on shadow-casting objects to the depth-encoding shader (implement OnPreRender).
1.2 The shadowing camera does its rendering.
1.3 Your custom script swaps all the shaders back in OnPostRender (a rough sketch of both swaps follows this list).
2: The main camera.
It renders the object normally. Any objects receiving shadows need to have a custom shader that gets the global shadowmap and does the setup.
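A rough sketch of steps 1.1 and 1.3, assuming you keep your own list of shadow-casting renderers and have a depth shader to swap in (both names here are placeholders):

using UnityEngine;

// Attach to the shadowing camera. Swaps each caster's shader to the
// depth shader before this camera renders, and restores it afterwards.
public class ShadowCameraSwap : MonoBehaviour
{
    public Shader depthShader;        // renders encoded depth
    public Renderer[] shadowCasters;  // the shadow-casting objects
    Shader[] savedShaders;

    void OnPreRender ()
    {
        savedShaders = new Shader[shadowCasters.Length];
        for (int i = 0; i < shadowCasters.Length; i++)
        {
            savedShaders[i] = shadowCasters[i].material.shader;
            shadowCasters[i].material.shader = depthShader;
        }
    }

    void OnPostRender ()
    {
        // Put the original shaders back so the main camera renders normally.
        for (int i = 0; i < shadowCasters.Length; i++)
            shadowCasters[i].material.shader = savedShaders[i];
    }
}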
Details:
To get the wiring right: when rendering a per-pixel light, you need to get the shadowmap for the light currently being rendered in that pass. Set the light's cookie to be the shadowmap. The normal shaders only read an occlusion value from the cookie's alpha, but you can assign a texture just fine and sample it in your fragment program.
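The cookie assignment itself is basically a one-liner; a minimal sketch (the class name is made up, and Light.cookie takes any Texture, so a RenderTexture qualifies):

using UnityEngine;

// Attach to the light. Makes the shadow camera's RenderTexture
// available to shaders as this light's cookie texture.
public class ShadowCookie : MonoBehaviour
{
    public RenderTexture shadowMap;  // the shadow camera's render target

    void Start ()
    {
        GetComponent<Light> ().cookie = shadowMap;
    }
}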
All objects that should be able to receive shadows need to have a custom shader on them. This one would get the cookie (now a shadowmap) and sample the depth from it. Then you do your compare in the fragment program and you're done.
A few notes:
- If you're using an encoded RGBA texture for the depth, it is important to turn off filtering.
- If you want to use the built-in shaders for unshadowed objects (you do!), make sure the shadowmap has a white alpha (otherwise they get confused). This leaves you 24 bits for depth, which should be plenty (see the encoding sketch after these notes).
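To make that concrete, here is one way the 24-bit split could work, shown as plain C# (the shader-side encode/decode would mirror this arithmetic; the class is purely illustrative):

using UnityEngine;

// Illustrative 24-bit depth packing: depth goes into R, G, B as three
// bytes, and A stays white so the built-in shaders read full occlusion.
public static class DepthEncoding
{
    public static Color Encode (float depth)
    {
        // Quantize depth in [0,1] to 24 bits, then split into three bytes.
        int v = (int)(depth * 16777215f);  // 2^24 - 1
        float r = ((v >> 16) & 255) / 255f;
        float g = ((v >> 8) & 255) / 255f;
        float b = (v & 255) / 255f;
        return new Color (r, g, b, 1f);
    }

    public static float Decode (Color c)
    {
        // Reassemble the three bytes into the original 24-bit value.
        int v = ((int)(c.r * 255f + 0.5f) << 16)
              | ((int)(c.g * 255f + 0.5f) << 8)
              |  (int)(c.b * 255f + 0.5f);
        return v / 16777215f;
    }
}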
============================================
What is it you're actually trying to accomplish? Is it full shadowmapping with self-shadowing, etc.? Or is it casting a real-time shadow from your main character onto the ground? The second is MUCH simpler, and runs on just about any card out there (faster, as well).
Ahh, that’s what I’m missing - swapping the shaders. Cool. Ok, I’ll have to play around with that next weekend! Thanks for the reply, that helps a lot.
I'm trying to do the full shadowmapping with self-shadowing route… figures I'd try the more complex one first. Why is the 'just the character's simple shadow on the ground' method so much simpler? Wouldn't it be pretty much the same process for both?
I think I’d be happy with just a shadow on the ground without the self shadowing - especially if it meant that it would run on more systems.
Cheers,
Paul Mikulecky
Lost Pencil Animation Studios Inc.
http://www.lostpencil.com
The main reason the other approach is simpler is that you can use a projector for it.
It basically goes like this:
- Set up your shadow camera, but don't do any shader swapping. In OnPreRender, you just set the pixel light count to 0, and reset it in OnPostRender (see the sketch after this list).
- OnPostRender should turn the image into a silhouette like this:
** Render a fullscreen white quad without depth testing or writing to depth.
** Render a fullscreen black quad at the far plane. It should only draw when something is in front of it (ZTest Greater).
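A minimal sketch of that first bullet, assuming the modern QualitySettings.pixelLightCount switch is the knob in question:

using UnityEngine;

// Attach to the shadow camera: disable per-pixel lights while it renders
// (the silhouette doesn't need them), then restore the previous setting.
public class ShadowCameraLights : MonoBehaviour
{
    int savedCount;

    void OnPreRender ()
    {
        savedCount = QualitySettings.pixelLightCount;
        QualitySettings.pixelLightCount = 0;
    }

    void OnPostRender ()
    {
        QualitySettings.pixelLightCount = savedCount;
    }
}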
Now, you have a texture you can whack on a projector. This will handle rendering the projected stuff on top of the normal geometry.
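Hooking the silhouette texture up to the projector could be as simple as this (the "_ShadowTex" property name is whatever your projector material uses; it's a placeholder here):

using UnityEngine;

// Attach next to a Projector component: feeds it the shadow camera's
// silhouette texture so it gets projected onto the scene geometry.
public class ShadowProjectorSetup : MonoBehaviour
{
    public RenderTexture silhouette;  // the shadow camera's render target

    void Start ()
    {
        Projector projector = GetComponent<Projector> ();
        projector.material.SetTexture ("_ShadowTex", silhouette);
    }
}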
With this approach, you don't need any fragment programs. You can use all the built-in shaders just fine, as the shadow is post-processed onto the geometry.
This way of doing things will not look quite so good - but it’ll be fine for doing a sunlight (works great for Oblivion)…
Neat. The concept is cool, but my head has been so involved in fragment shaders that I'm a bit confused. When you say to render the fullscreen quads, what do you mean? Have a white quad hidden, and then unhide it in OnPostRender? But then how does it render if it's in OnPostRender? Sorry for the silly question.
Cheers,
Paul Mikulecky
Lost Pencil Animation Studios Inc.
http://www.lostpencil.com
You can execute normal OpenGL code inside OnPostRender, so you just draw your quads from there.
(History: the GooBall glow was all done inside OnPostRender, as we did not have the ImageEffects base code in place back then - which shows you can do just about anything there.)
Code would look something like this:
using UnityEngine;

// Attach to the shadow camera. After it renders, draws a fullscreen quad
// (one per material pass) in pixel coordinates over the render target.
public class FullscreenQuad : MonoBehaviour
{
    public Material material;

    void OnPostRender ()
    {
        // Set up a matrix that maps vertices in pixel coordinates.
        GL.LoadPixelMatrix ();
        for (int i = 0; i < material.passCount; i++)
        {
            material.SetPass (i);
            GL.Begin (GL.QUADS);
            GL.Vertex3 (0, 0, 0);
            GL.Vertex3 (Screen.width, 0, 0);
            GL.Vertex3 (Screen.width, Screen.height, 0);
            GL.Vertex3 (0, Screen.height, 0);
            GL.End ();
        }
    }
}
Note: this is untested, but if you go this route, just ask away.
Oh cool! Thanks Nicholas. I’ll play with it and if I get stuck again I’ll be sure to ask. Thanks!
Cheers,
Paul Mikulecky
Lost Pencil Animation Studios Inc.
http://www.lostpencil.com
AWESOME! Thank you Aras! (and Nicholas!). You guys totally rock. 
Cheers,
Paul Mikulecky
Lost Pencil Animation Studios Inc.
http://www.lostpencil.com