I’m outputting a procedural mesh with Graphics.DrawProcedural.
Everything is fine: UVs, normals, vertices… lighting is OK too.
But I’m not able to cast or receive shadows on this procedural mesh.
I’m using macros like LIGHTING_COORDS, TRANSFER_VERTEX_TO_FRAGMENT and LIGHT_ATTENUATION correctly, but nothing seems to work.
I found some examples here:
Apparently, there is something about FallBack “VertexLit” that helps shaders cast/receive shadows, but since my material is invoked like below and not from a MeshRenderer, I think there is a problem getting light and shadow data from the Unity engine:
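Something along these lines (a simplified sketch of the setup, not my exact code; `_Vertices` and the vertex count are placeholders, and the shader is assumed to fetch vertex data from the buffer via SV_VertexID):

```csharp
using UnityEngine;

// Simplified sketch: draw procedural geometry directly, with the
// material applied by hand instead of through a MeshRenderer.
public class ProceduralDraw : MonoBehaviour
{
    public Material proceduralMaterial; // the custom shader discussed above
    ComputeBuffer vertexBuffer;         // placeholder: packed crowd vertices
    int vertexCount = 1000000;

    void Start()
    {
        // Positions only here; a real buffer would also pack normals/UVs.
        vertexBuffer = new ComputeBuffer(vertexCount, sizeof(float) * 3);
        // ... fill vertexBuffer with the crowd geometry ...
        proceduralMaterial.SetBuffer("_Vertices", vertexBuffer);
    }

    // Runs after each camera renders the scene; no MeshRenderer involved,
    // which is presumably why Unity's light/shadow setup never sees this geometry.
    void OnRenderObject()
    {
        proceduralMaterial.SetPass(0);
        Graphics.DrawProcedural(MeshTopology.Triangles, vertexCount);
    }

    void OnDestroy()
    {
        vertexBuffer.Release();
    }
}
```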
I am not using procedural drawing, but I do have shadow issues even with a MeshRenderer.
I have not tried it out yet, but I found some “LightMode” pass tags that look worth inspecting: ShadowCaster and ShadowCollector.
Are you able to embed these somehow in the shader?
I assume you have these set up properly, but anyway I’d like to point out the Quality settings for shadows, especially the Distance one.
I tried many things, including the ShadowCaster and ShadowCollector tags… but no shadows and no light attenuation.
Here is the goal: … casting/receiving shadows on people.
This crowd is drawn with DrawProcedural (more than 1,000,000 vertices).
That way, I can avoid Unity’s 16-bit index limitation (roughly 65,000 verts max per mesh) and I don’t need to split the crowd into separate meshes.
The shadow distance is also correctly set, but don’t pay attention to that video; there is no shadow at all in that preview.
Any news on this, guys?
It would be great to use lighting/shadows with DrawProcedural()… especially shadows; I have some difficulty handling shadowmaps on my own.
Bump. Yes, it would be nice to know whether Unity will be fixed so Mesh/MeshFilter/MeshRenderer can support the resource limits of D3D11 etc.
Modern graphics cards can process huge numbers of primitives. In fact, I was looking at this just 5 minutes ago: 150 draw calls, half with the same material, and the GPU barely registered, while the CPU took 3 ms to dispatch them. That may not seem like much, but it’s x2 for the depth-buffer update, x2 of that for shadows, x2 of all of the above for stereoscopy, and it adds up.
This is important in emerging applications like VR, where a consistent frame rate is of utmost importance. In VR there isn’t time for the CPU to do dynamic batching and culling on larger scenes, whereas GPUs can take millions of triangles with no slowdown. There is a very easy and obvious solution here, but Unity’s 16-bit limit is sitting smack-bang in the way of it!
I’ve used this recently; DrawProcedural can work with lighting just fine. Plug it into a command buffer and draw it into the deferred pipeline to get lighting.
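For example, something like this (a sketch, assuming a deferred camera; the material and vertex count are placeholders):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: inject procedural geometry into the deferred G-buffer so it
// gets lit by the deferred lighting pass like regular opaque geometry.
[RequireComponent(typeof(Camera))]
public class ProceduralDeferredDraw : MonoBehaviour
{
    public Material gBufferMaterial; // assumed: a shader that writes to the G-buffer
    public int vertexCount = 1000000;
    CommandBuffer cmd;

    void OnEnable()
    {
        cmd = new CommandBuffer { name = "Procedural into G-buffer" };
        cmd.DrawProcedural(Matrix4x4.identity, gBufferMaterial, 0,
                           MeshTopology.Triangles, vertexCount);
        // AfterGBuffer runs before the deferred lighting resolve, so this
        // geometry is then shaded by all deferred lights.
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterGBuffer, cmd);
    }

    void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.AfterGBuffer, cmd);
    }
}
```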
However, shadows don’t work. The reason is that a shadowmap is generated by a camera render, and this is a DIFFERENT camera than the one the DrawProcedural worked on. DrawProcedural is a direct call to the GPU to render geometry into the existing buffers. The geometry does not ‘exist’ in the game world, which means it does not exist for a shadow camera to see, unless you also render the geometry for that camera. And if you’re rendering the geometry for every camera, you’re going to be drawing those millions of vertices for every camera. For 4 shadow-casting lights, that would mean rendering the geometry 4 more times (and if you’re pushing geometric limits with a million vertices, turning that into 5 million will stress the GPU on all but the very highest-end cards).
Yes, it would be nice to have the ability to add DrawProcedural calls to the shadow cameras, but remember that for every light with shadows, it would be drawing the entirety of your geometry again.
This one might sound a bit obvious, but… wouldn’t you just render the object twice, once for shadows and once for the geometry? Most things need to be drawn twice anyway for shadowing, so it only makes sense to do that.
Remember that shadows need to be rendered from different viewpoints! The geometry would need to be rendered once from the camera’s view, and then once more for each light that shadows it, from that light’s perspective. You can’t just render it again once “for shadows”: it has to be rendered from the viewpoint of a second camera, for each light source.
Drop in Unity’s realtime shadows and notice that each realtime shadowed light increases the draw calls per light, not just once for everything. Add 3 lights hitting 10 objects and it’ll add 30 draw calls, because each object is drawn again from the perspective of each shadowmap-rendering camera.
Yep, theoretically that is possible. However, if you’re pushing limits with DrawProcedural to draw vertices far faster than you ever could with normal meshes, then you’ll probably run into issues drawing them multiple times. That’s the problem I’m hitting: I have my own shadow-rendering system set up, and adding my DrawProcedural calls to the shadow cameras is far too intensive.
Umm, shadowmaps are done in 2 steps: one from the light and one from the camera’s perspective. You only get 1 extra draw call per camera if Unity has done things correctly. You are right that the number of draw calls is lights * shadowcasters, but then it’s + the number of cameras. (Unless Unity is acting stupid…)
It does not cache the shadowmap between cameras, because each camera is a completely separate and distinct rendering pipeline. Arguably it should, but there are always complications.
However, what I meant by “camera” was the shadowmap-rendering camera: the camera you never see or interact with, and that most people don’t even know exists. Basically, Unity renders the scene from the position of the light to get the shadowmap, so when I said everything is drawn again from each shadow camera, I meant that the scene is rendered from the position of each light to build its shadowmap. That is why the number of draw calls is lights * shadowcasters: those draw calls come from the additional renders of the scene (though they are basically depth passes, not as heavy as a normal shader pass).
This would not work unless you draw them into the shadowmap yourself, because that event happens after the shadowcasters are rendered, and the shadowcaster passes don’t include geometry from the DrawProcedural call. They don’t know that geometry even exists.
And that’s the reason for what I suggested: adding a command buffer that does the DrawProcedural call at LightEvent.AfterShadowMap would cause the geometry to be drawn INTO THE SHADOWMAP, as it’s still the bound target!
(It’s something Unity added recently.)
And I’m pretty sure they also added the ability to draw procedural geometry from inside command buffers.
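For illustration, roughly like this (a sketch; it assumes the material has a depth-only/shadowcaster-style pass at index 0, and that the shadowmap is still the bound render target at LightEvent.AfterShadowMap, as described above):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: draw the procedural geometry into a light's shadowmap right
// after Unity has rendered its own shadowcasters into it.
[RequireComponent(typeof(Light))]
public class ProceduralShadowInjector : MonoBehaviour
{
    public Material shadowMaterial;  // assumed: a depth-only/shadowcaster-style pass at index 0
    public int vertexCount = 1000000;
    CommandBuffer cmd;

    void OnEnable()
    {
        cmd = new CommandBuffer { name = "Procedural into shadowmap" };
        cmd.DrawProcedural(Matrix4x4.identity, shadowMaterial, 0,
                           MeshTopology.Triangles, vertexCount);
        // At AfterShadowMap the shadowmap should still be bound,
        // so this draw lands in it.
        GetComponent<Light>().AddCommandBuffer(LightEvent.AfterShadowMap, cmd);
    }

    void OnDisable()
    {
        GetComponent<Light>().RemoveCommandBuffer(LightEvent.AfterShadowMap, cmd);
    }
}
```

One caveat I haven’t verified: how the light’s view/projection matrices are set up at that point, so the shadow pass may need to transform the procedural vertices into the light’s shadow space itself, and for directional cascades this fires once for the whole shadowmap.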