VFX Graph Painting with particles

Hi there. I was looking back at this topic about particles colliding with an animated skinned mesh and sticking particle decals to create blood splatter and thought of some alternative solutions :thinking:.

Instead of using decals, the idea would be to “Paint” :paintbrush: a texture with particles as “brush strokes”.


It offers some advantages:

  • As the result is a texture, it can be directly used, modified or blended in the shaders of the Targeted Skinned Mesh.
  • Contrary to attached decals, no sliding should occur.

Before we start: this involves a Custom Pass and a Render Texture, so it’s a bit involved technically and isn’t cheap either. But once set up, it’s fun to play with. I would not recommend using it in production without properly profiling the performance.

That being said, how do we achieve this :grin: ? I will break down the process in HDRP, but everything should be doable in URP with some adjustments. We need to transform the particle Hit/Collision position into UV space, and for that, we need the Skinned Mesh’s UV information at the collision position.

So first, as in the linked topic, we must collide with an animated Skinned Mesh, which sadly isn’t that trivial with GPU particles. Some methods have been mentioned, like the Demo team’s real-time SDF baker, but here, let’s collide with the Depth Buffer.

Colliding with the Mesh:
In VFX Graph, this can be easily done by using the Collision Depth Buffer Block.


But, when activating the Depth Buffer Collision, something odd happens. As the particles are opaque, they write into the depth buffer and collide with themselves. For now, let’s just put them into Alpha Blend.

With a high friction on the collider, the particles stop perfectly on the Skinned Mesh. But, sadly, if the Skinned Mesh is animated, it’s another story. :sob:

Particles properly colliding with the depth buffer.
Particles collide with the depth buffer but don’t stick to the animated Skinned Mesh.

While the first collision is correct, the particles don’t stick to the animated Skinned Mesh. We want to convert our collision position from World Space to our Mesh’s UV space. But first, we need to render the Skinned Mesh UVs in a custom pass in order to read them. We’ve got work to do… :exploding_head:

Creating a Custom UV Pass:
To get the UVs of our Skinned Mesh, we can create a custom pass where we draw the Skinned Mesh with an override Material that outputs its UVs:

  • Create a new Layer and give it a name (Uvs).

  • Assign this layer to your Skinned Mesh.

  • Create a Camera and give it a name.

  • Set its Target Display to “Display 2”.

  • Create an Empty Game Object.

  • Add a Custom Pass Volume component.

  • Set the Mode to Camera and set the Target Camera to your camera.

  • Set the Injection point to After Opaque and Sky.

  • Click the “+” button and add a new “Draw Renderers Custom Pass”.

  • Give the Pass a name (UVsPass).

  • Make sure to at least set the Clear Flags to “Color”.

  • In the Filters, set the Layer Mask to the previously assigned Layer (Uvs).

  • Create an Unlit ShaderGraph that outputs the UVs into the Base Color.

  • Assign this shader and set the Shader Pass to ForwardOnly.

  • Enable the Depth override and set Write Depth to True.

Now that this is set up, we should see the Skinned Mesh on Display 2 of the Game tab.
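A side note on precision (my own addition, not part of the original setup): if this pass renders into an 8-bit-per-channel color buffer, the UVs we read back later are quantized to 256 steps per channel. A quick plain-Python sketch of the error that introduces:

```python
def quantize_uv_8bit(u, v):
    """Simulate storing a UV pair in an 8-bit-per-channel color buffer."""
    return round(u * 255) / 255.0, round(v * 255) / 255.0

u, v = 0.12345, 0.6789
qu, qv = quantize_uv_8bit(u, v)

# Worst-case error is half a step (0.5 / 255), which is roughly 2 texels
# on a 1024px texture, so a floating-point color buffer format may be
# preferable if you need per-texel accuracy.
error_texels = abs(u - qu) * 1024
```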

Getting the UVs:

Now, upon collision, we can sample our UVs Camera and read its color buffer:

  • Create an Exposed Camera Property.

  • Bind it to your “UVs Camera” thanks to a VFX Property Binder or your own custom C#.

  • Drop a Sample Camera Buffer Operator and plug your Camera Property.

  • Transform the Collision Position to a Viewport Position thanks to an operator:

With those operations, we now get the color from our UVs camera, which outputs the UVs of our Skinned Mesh. We are close :blush:
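Conceptually, the World-to-Viewport step boils down to a projection plus a remap. Here is a minimal sketch of that math in Python with NumPy — not the actual VFX Graph operator, just what it computes:

```python
import numpy as np

def world_to_viewport(world_pos, view_proj):
    """Project a world-space point into [0, 1] viewport coordinates,
    i.e. the UVs used to sample the camera's color buffer."""
    clip = view_proj @ np.array([*world_pos, 1.0])  # to clip space
    ndc = clip[:3] / clip[3]                        # perspective divide -> [-1, 1]
    return ndc[:2] * 0.5 + 0.5                      # remap xy to [0, 1]

# With an identity view-projection (illustrative only), a point at the
# world origin lands in the center of the viewport:
uv = world_to_viewport((0.0, 0.0, 0.0), np.eye(4))  # -> [0.5, 0.5]
```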

Transforming Position to UV Space:

Now that we have our Skinned Mesh UVs, we can transform our collision position from World Space to the Skinned Mesh’s UV space.

  • Create a new Output Context and wire it to your Update Context.
  • In the Output Context, orient your particles so that they face the Y-axis.
  • Set the Alive attribute based on the CollisionEventCount so that this output is only rendered after the first collision.
  • Transform your position to UV space by setting the Position from the previously stored Color attribute.
  • Add an offset so that you can easily isolate it in the scene to render it to a Render Texture.

With enough particles colliding against the mesh, their repositioned copies should reveal the UV layout of our Skinned Mesh.
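In plain terms, the reposition step maps the sampled UV color onto a flat quad placed out of the way. A hypothetical sketch (the function and parameter names are mine, not VFX Graph operators):

```python
def uv_to_plane_position(sampled_uv, offset, plane_size=1.0):
    """Turn the UVs sampled from the camera buffer into a particle
    position on a flat XZ quad. The offset pushes the whole 'UV layout'
    away from the rest of the scene so a dedicated camera can frame it."""
    u, v = sampled_uv
    return (offset[0] + u * plane_size,
            offset[1],
            offset[2] + v * plane_size)

pos = uv_to_plane_position((0.25, 0.75), offset=(100.0, 0.0, 100.0))
# -> (100.25, 0.0, 100.75)
```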

Rendering to Texture:
Now that we can see that our UV information is correct and that we’re successfully converting the collision position into our Skinned Mesh’s UV space, let’s render our particles to a Render Texture.

  • Create an Orthographic Camera, then position and orient it above your UV-space particles.
  • Create a Render Texture and assign it to your camera.

You can now use this texture in the Shader of your Skinned Mesh. :partying_face: :tada:

The setup was quite tedious, but now you can play with your particle brush in UV space to paint on your Render Texture. You can play with size, rotation, blend mode, and textures to create nice, dynamic results. This can be useful for blood or paint splats, dynamic damage masks, etc.


Depending on the information and precision needed from the texture, you can drastically reduce its size by choosing the Color Format accordingly.

For example, if the color isn’t randomized per particle but set in the shader, you can use a single-channel format like R8_SNorm, which will be way smaller.
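To put illustrative numbers on it: the uncompressed size of a texture is width × height × bytes per pixel, so dropping from a 4-byte RGBA format to a 1-byte single-channel one quarters the footprint:

```python
def texture_size_bytes(width, height, bytes_per_pixel):
    """Uncompressed GPU texture size, ignoring mipmaps."""
    return width * height * bytes_per_pixel

rgba = texture_size_bytes(1024, 1024, 4)  # e.g. R8G8B8A8: 4 bytes/pixel
r8 = texture_size_bytes(1024, 1024, 1)    # e.g. R8:       1 byte/pixel
# The single-channel texture is a quarter of the RGBA size.
```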

I hope that this topic will be helpful. I will clean up my scene and share it later. Need to go to bed.

Here is a small package with a simple scene and everything properly setup. It should be working for Unity 6 and HDRP. Everything can be done in URP, but the custom pass process would be different.

VFXG_ParticlePainter.unitypackage (1.2 MB)


Incredibly interesting, Thank you for sharing @OrsonFavrel ! :grinning:

If you’ll allow me this curiosity / exploration…

I’m interested in whether there’s an actual need for the second display camera - won’t that be problematic in builds? Is there any warning regarding setting things to a second display but not activating it? Doesn’t this double all render costs globally?

Also, with Render Graph, would it be possible to just have this execute as a single pass in the original camera, and render to some render texture that would be accessible to the VFX graph?

Also, if we want to limit the data in question, is there perhaps a way to simplify the data passed in to vfx graph and retrieved from it? For example, we could create an event output handler and pass out the position and normal of the collision on the vfx graph to C#, and then check it vs our target meshes (assuming we only care about a limited amount of skinned meshes)

Also, I am wondering how expensive it would be to try and get the actual triangle the particle hit, because that could ideally create a situation where you can stick the particle using binding weights for the exact point, and start passing in transforms on a graphics buffer… so it’s consistent without a full texture per painted object, and only requires however many skinning transforms in buffer size…

For the above, it’s a shame we can’t just pass in transforms as attributes or even a transform array as a property…

As an improvement on the UV coordinates buffer, it’s possible to set the renderer’s UV2/3/4/5/6/7 to a vector3 pre-skinned mesh position, I guess, then just render that, so it can work with any moving mesh.

Now I’m tempted to give this a go with the output event handler → going to C# → trying to compare the position vs whitelisted meshes → writing attribute information to the particle that triggered the event…

I’m going to have to experiment with this, too! :sweat_smile:
Thank you for sharing, again!


Oh this is also an interesting option! I guess the main reason for the decals would be if you have a customizable character of some sort?

I’m sure the dev of PaintIn3D could use this XD

Morning. So the second display isn’t really necessary. It’s mainly used here so that it’s easy to debug and see all at once. The UV’s pass, the particles, etc. This could be rendered to a small render texture and sampled in VFX Graph almost the same way.

Sadly, I’m not too familiar with Render Graph. I’m still getting up to speed with Unity, having spent most of my game development career with Unreal, so there are still many areas of Unity I need to explore.
When it comes to performances, I would say that it’s often a matter of context: target platform, current scene, project, etc. So I think it’s always best to profile it in your context, scenario, and specific needs.

Usually, we try to avoid round-tripping between GPU and CPU, especially for gameplay, but it can work. This solution has the advantage that the VFX Graph data comes from the depth and color buffers of the custom UV pass. Therefore (I might be wrong), it shouldn’t be more expensive to get the information from a higher-resolution mesh or even several meshes.

Now, regarding accessing the Triangle Index: this was initially the goal I was going for. I’ve been able to make it work and will explain it in the original thread, but it involves a for-loop on the GPU, which isn’t ideal given its parallel nature. Putting it briefly, my colleagues didn’t recommend doing this :sweat_smile:

Don’t hesitate to share your experiment :slight_smile:

Well, the decal solution is still pretty useful, but as we saw, it gets tricky on moving geometry. Now, the cool thing with decals is that all the modifying/blending with the base Color/Normal is done directly in VFX Graph, without needing to customize the shader of the targeted geometry.

I still haven’t gotten a fully working solution for binding to a vertex; I posted what I had, but I’m stuck on buffer errors.