Best practices for using and managing VFX Graphs across multiple prefabs?

How should I structure the use of VFX Graphs in a game?

For example, take the EllenSkinnedMeshEffect demo in the VFX Graph samples.

Characters can be holographic, burned/disintegrated, or electrified.
These states can also occur at the same time.

In fact, many objects in the game can be in various states, like the characters above.
When the player shoots a laser gun, the hit object becomes holographic.
When the player shoots an electric gun, the hit object is electrocuted.
When the player uses a flamethrower, the hit object burns and disappears.

In the demo, there is a VFX Graph object,
and it receives the Skinned Mesh Renderer of a scene object (a hierarchy object, not the prefab) as a parameter.

Let’s say there are 10 different kinds of characters (instantiated prefabs) in the scene.
For this to work, you need 10 __EFFECTS groups and 10 characters.
Since each character can be in a different state at the same time,
you need as many __EFFECTS groups as there are characters
(each connected in pairs).

Here’s the question:
How should I use these?

Case 1)
All character prefabs contain every VFX object.
Set all parameters at edit time.
Enable and use the required VFX objects at runtime.

Case 2)
Instantiate the required VFX objects at runtime.
Set parameters through code.
Destroy the VFX objects when they are no longer needed.
(A rough sketch of this flow is below.)
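
To make Case 2 concrete, here is a rough sketch of what I imagine; the prefab field, the exposed property name "TargetMesh", and the lifetime are placeholders:

```csharp
using UnityEngine;
using UnityEngine.VFX;

// Rough sketch of Case 2: spawn an effect prefab, feed it the hit object's
// renderer, and clean it up after a fixed lifetime. Names are placeholders.
public class BurnEffectSpawner : MonoBehaviour
{
    [SerializeField] VisualEffect burnEffectPrefab; // prefab with a VisualEffect component
    [SerializeField] float lifetime = 5f;           // assumed duration of the effect

    public void SpawnOn(SkinnedMeshRenderer target)
    {
        // Instantiate the VFX object at the target's position.
        VisualEffect vfx = Instantiate(burnEffectPrefab, target.transform.position, Quaternion.identity);

        // "TargetMesh" stands in for whatever exposed property the graph samples.
        vfx.SetSkinnedMeshRenderer("TargetMesh", target);
        vfx.Play();

        // Destroy the VFX object once it is no longer needed.
        Destroy(vfx.gameObject, lifetime);
    }
}
```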

I feel there are big problems with both cases.
Which method is right?
Or is there another, better way?

Hello,

The Skinned Mesh demo uses a Visual Effect component located in the prefab hierarchy because that was a simpler way to link the transform of the Skinned Mesh Renderer. This is no longer necessary since 2022.2.a14 (see this early preview or twitter): the transform can be integrated directly within the Visual Effect Graph, with a system simulating in world space.

Case 1 is a valid approach, but it isn’t really easy to author and scale in a real-world production.

Case 2 is also a valid approach, and probably easier to author for a production; however, instantiating a game object at runtime isn’t free and can create allocation spikes.
It’s generally advised to use a pool of objects allocated early, during loading, and to only enable them when needed (for instance, see the ObjectPool pattern helper).
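
As a rough illustration (not the only way to do it), a pool built on UnityEngine.Pool.ObjectPool, available since Unity 2021.1, could look like this; the prefab reference and capacity are placeholders:

```csharp
using UnityEngine;
using UnityEngine.Pool;
using UnityEngine.VFX;

// Sketch of the pooling approach: pre-create effect instances during loading
// and only enable/disable them at runtime instead of Instantiate/Destroy.
public class VfxPool : MonoBehaviour
{
    [SerializeField] VisualEffect effectPrefab; // placeholder prefab reference
    ObjectPool<VisualEffect> pool;

    void Awake()
    {
        pool = new ObjectPool<VisualEffect>(
            createFunc: () => Instantiate(effectPrefab, transform),
            actionOnGet: vfx => vfx.gameObject.SetActive(true),
            actionOnRelease: vfx => vfx.gameObject.SetActive(false),
            actionOnDestroy: vfx => Destroy(vfx.gameObject),
            defaultCapacity: 16);

        // Warm up during loading so gameplay never pays the Instantiate cost.
        var warm = new VisualEffect[16];
        for (int i = 0; i < warm.Length; i++) warm[i] = pool.Get();
        for (int i = 0; i < warm.Length; i++) pool.Release(warm[i]);
    }

    public VisualEffect Get() => pool.Get();
    public void Release(VisualEffect vfx) => pool.Release(vfx);
}
```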

Can you share what your concern is? I’m not sure I understand what you are worried about.


Thank you for your quick reply!
My concerns are similar to the ones you mentioned.
Since I don’t have much experience with this yet, I need some advice on very basic and general usage.
Let me know if I missed anything.

Case 1 is difficult in real-world production, as you said.
Whenever I create a new VFX, I must add it to every object prefab.
(Even with nested prefabs, this seems to have a large overhead.)
I don’t know how much it will affect performance, but it won’t be positive.
There is also the problem that many scene objects would each carry dozens of VFX objects.
And it takes too much work to connect the parameters required by the VFX Graph in the Inspector for every object prefab.
If there aren’t a lot of objects, that’s fine,
but in an actual game this would be outrageous.

Case 2 shares the usual problems of creating things at runtime.
There are other parts that worry me.
Is the following the right way to create and use VFX at runtime?

  1. Write code that references the Timeline for the VFX and the parameters the VFX requires.
  2. Create the VFX object at runtime (Instantiate, Addressables, etc.).
  3. Connect the target’s values to the empty parameters of the generated VFX object and Timeline by code (see the sketch after this list).
  4. Initialize and use every VFX and object through the above process.
    I think this will take a lot of code, but I wonder if this is right.
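
To be concrete, here is a sketch of what I imagine for steps 2–3; the track name "VFX Activation" and property name "TargetMesh" are made up:

```csharp
using UnityEngine;
using UnityEngine.Playables;
using UnityEngine.VFX;

// Sketch of steps 2-3: spawn the VFX, push the target's values into its
// exposed properties, then rebind the Timeline track and play it.
public class RuntimeVfxScenario : MonoBehaviour
{
    [SerializeField] GameObject vfxPrefab;       // prefab containing the VisualEffect
    [SerializeField] PlayableDirector director;  // Timeline driving the scenario

    public void Play(SkinnedMeshRenderer target)
    {
        // Step 2: create the VFX object at runtime.
        var vfx = Instantiate(vfxPrefab).GetComponent<VisualEffect>();

        // Step 3a: fill the graph's exposed parameters from the target.
        vfx.SetSkinnedMeshRenderer("TargetMesh", target);

        // Step 3b: rebind the Timeline's VFX track to the new instance.
        foreach (var binding in director.playableAsset.outputs)
        {
            if (binding.streamName == "VFX Activation")
                director.SetGenericBinding(binding.sourceObject, vfx);
        }

        director.Play();
    }
}
```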

Plus, a few more questions… the platform is high-end PC/console.

  • Is there any problem if hundreds of Timelines for VFX scenarios play at the same time?
  • Do I need a dedicated manager to create/destroy VFX?
    Or is it right for each object to control its own VFX?

Of course, I understand that every choice has trade-offs and costs.
But which method is the common one?
I want to know which method is recommended.
It doesn’t have to be one of the two cases above.
Is there another way?

Hello,

It will be difficult to provide a simple answer more accurate than “it depends”.

Yes, it’s valid to use this pattern for VFX at runtime; most of the projects I have looked at use a project-specific object pooling system that fits their gameplay needs.

If your VisualEffectAsset is meant to be played with a specific Timeline, I would suggest creating a prefab with both objects already linked together; that sounds simpler, but I’m probably missing a subtlety here.
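
As a sketch of that idea: if the director’s tracks are already bound to the VisualEffect inside the same prefab at edit time, the runtime code shrinks to spawning the prefab and playing it (the component name is made up):

```csharp
using UnityEngine;
using UnityEngine.Playables;

// Sketch of the pre-linked prefab: the Timeline tracks are already bound to
// the VisualEffect in the same prefab, so no rebinding code is needed.
// Usage: Instantiate(prefab).GetComponent<LinkedVfxTimeline>().Play();
public class LinkedVfxTimeline : MonoBehaviour
{
    [SerializeField] PlayableDirector director; // bound inside the prefab at edit time

    public void Play() => director.Play();
}
```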

I don’t know about the Timeline overhead, but running hundreds of VisualEffect instances at the same time could certainly lead to very high CPU and/or GPU usage; it depends on their content and the target platform.

It isn’t mandatory: you don’t need a dedicated manager for the VFX in the scene, but you will probably need a generic manager for object instantiation in general.

Yes, this is how VFX property binders are mostly used: a component alongside the Visual Effect pushes scene values into the graph’s exposed properties every frame.
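
Conceptually, a hand-rolled equivalent of a simple binder (with made-up names) would be:

```csharp
using UnityEngine;
using UnityEngine.VFX;

// Hand-rolled equivalent of a simple property binder: copy a scene value
// into an exposed graph property every frame. The package's property binder
// components do the same thing generically, per property type.
public class FollowTargetBinder : MonoBehaviour
{
    [SerializeField] VisualEffect vfx;
    [SerializeField] Transform target;                    // scene object to follow
    [SerializeField] string property = "TargetPosition";  // made-up exposed Vector3

    void LateUpdate()
    {
        if (vfx.HasVector3(property))
            vfx.SetVector3(property, target.position);
    }
}
```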

Exactly, these are trade-offs, and it can actually make sense to mix the two approaches within the same project. Imagine a typical hack ’n’ slash: the effects used for spells can be managed through a pooling system with a recycle pattern to limit the overhead of per-frame allocation, while the effects used in cinematics can be created just before the event.

Last but not least, since the discussion goes beyond VFX alone, I would like to mention that it’s also legitimate to alternate between the Shuriken Particle System and the Visual Effect Graph depending on the usage and needs (see this page).


Thank you for your kind answer.
It was helpful!