Is it better to split VFX graphs into multiple instances?

I cannot find a clear answer to my question anywhere.
There are 10 different independent effects triggered by their own 10 events (OnPlay1, OnPlay2…). Is there any reason (like a performance difference) to put them all into one large graph instead of 10 graphs on different game objects?

@VladVNeykov Sorry to interrupt you, but today I had a debate about VFX organisation in our project, this question came up again, and it would be very helpful to know the answer.

We want to make many swords in our game (60+) and each of them has its own attack combination, like slash down, slash left, stab, and other moves. Each move requires some VFX, but because the effects are different it is hard to find a common interface for all of them. Currently we use a switch node to swap the mesh, rotate particles, or change anything else.

However, this pipeline is not efficient, so we want to make graphs with parameters for each move (stab VFX, slash VFX…) and reuse them. There are two approaches we can take:

  • Create a graph asset for each move, then add the desired VFX as a separate Visual Effect component, so if a sword has 5 attacks, there would be 5 separate GameObjects with Visual Effect components.
  • Create subgraphs for moves; each weapon gets its own graph composed of the desired subgraphs, triggered by different events. If a sword has 5 attacks, there would be 5 subgraphs inside the SwordXYZ graph.

My question is: is there a significant difference in performance, memory, rendering, or anything else that could cause trouble in the future?

I don’t know about performance, but what I think I know is that there is a problem with VFX sorting:

If you have multiple effects in one graph, you can modify the Output Render Order.
When using multiple GameObjects, sorting will be based on each object’s distance from the camera.


Yes, this might cause some problems, but in my specific case it might not.
I see there is some overhead for having multiple instances of the same effect, so it looks like it is probably better to have one graph with subgraphs inside. The only thing I don’t like that much is that there must be a separate graph for almost every weapon, but it’s still better than what we have now.

Hi @Qriva, yup, there’s currently an overhead per Visual Effect component, so the second option sounds more performant (of course, benchmarking it for your case would give the best answer).

You mentioned you are using switch operators for meshes (I assume output meshes); does that work for you? I think the limit is 32 switch cases, so you probably have to branch between two switch operators to cover all 60+ swords?

Do you have other specific parts of your effects (trails, sparks, etc.) which are particularly hard to find a common interface for?

Also it sounds like you have different swords and unique VFX for different animations. How different are the effects for two swords for the same animation?

In a single graph there are around 3-6 switch nodes for different properties like mesh, color, lifetime… and there are separate graphs per sword. Actually, good to know there is a hard limit; is it set per graph or per system?

About the ‘common interface’ - I think this is not VFX Graph’s fault, this is logistics. For example:
the same slash animation is used by two different weapons, but we might want a different VFX for each, and one of them might require two colors and one texture, while the other needs only one color, a noise texture, and a noise scale.
If I wanted to make one (or several) large master graphs, then these params would need to be set up front (not dynamically). If I wanted to make the first slash pink and the second slash blue, I would have to know in C# that such parameters exist, but because these params might differ per effect, it becomes overcomplicated and messy.

But even if there was a way to do this, I would still need multiple outputs, because the mesh is used by all of them, so currently if a previous slash particle is alive and the mesh is swapped, both of them are rendered the same way.
Additionally, there is no way to disable an output, so if I wanted to use a single event, I would need to make a switch controlling the amount of particles emitted by each system.
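For the single-event idea, the per-system emission switch could also be driven from C# by setting an exposed property before sending the event. A minimal, untested sketch; the property name "AttackIndex", the event name "OnAttack", and the class are all made up for illustration:

```csharp
using UnityEngine;
using UnityEngine.VFX;

// Hypothetical helper: one event for all attacks, with an exposed int
// property ("AttackIndex") that switch nodes in the graph can read to
// drive the spawn count of each system (zero for inactive systems).
public class SwordAttackTrigger : MonoBehaviour
{
    VisualEffect visualEffect;

    // Both names are assumptions for this sketch.
    static readonly int attackIndexID = Shader.PropertyToID("AttackIndex");
    static readonly int eventID = Shader.PropertyToID("OnAttack");

    void Awake()
    {
        visualEffect = GetComponent<VisualEffect>();
    }

    public void PlayAttack(int index)
    {
        if (index < 0) return; // ignore invalid indices
        visualEffect.SetInt(attackIndexID, index);
        visualEffect.SendEvent(eventID);
    }
}
```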

Anyway, thank you for the answer. I guess the second option is the way to do it, but in case it helps you in some way, I will attach an image of one of our first prototype graphs.

(attachment: slash.png - prototype graph)

No, just per switch operator.

Thanks! Apologies if you’ve already thought of this, but are you activating the effect through an event via script? If so, many of the things you are setting can be set directly via script, avoiding the many switches in your graph. Something like this (untested code :stuck_out_tongue: ) :

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.VFX;

public class VFXEventAttributeExample : MonoBehaviour
{
    VisualEffect visualEffect;
    VFXEventAttribute eventAttribute;
    private int weaponID;

    static readonly int eventID = Shader.PropertyToID("SwordAttack");
    static readonly int positionID = Shader.PropertyToID("position");
    static readonly int colorID = Shader.PropertyToID("color");

    void Start()
    {
        visualEffect = GetComponent<VisualEffect>();    
        eventAttribute = visualEffect.CreateVFXEventAttribute();
    }

    void StartTrail()
    {
        // Set attributes for each weapon
        switch (weaponID)
        {
            case 0:
                eventAttribute.SetVector3(positionID, new Vector3(0f, 0f, 0f));
                eventAttribute.SetVector3(colorID, new Vector3(1f, 0f, 0f));
                break;

            case 1:
                eventAttribute.SetVector3(positionID, new Vector3(0f, 1f, 0f));
                eventAttribute.SetVector3(colorID, new Vector3(0.5f, 1f, 0.5f));
                break;
        }
   
        // Sends the event with the attributes over to be inherited
        visualEffect.SendEvent(eventID, eventAttribute);
    }
}

Then in your VFX you inherit them without the need for using Switches:

Kinda, but if you just need to toggle between a few outputs and there isn’t a humongous amount of particles, you can do something like this:

(turning off particles per output, only 1 output will show at a time)

If your output(s) are outputting lots of vertices, you can make this more efficient by enabling Compute Culling in the inspector when selecting the desired output:

I noticed in your screenshot that you are setting Age manually to 0 in Initialize; it’s probably left over from prototyping, but I don’t think it’s necessary.

Anywho, not sure if any of this would be useful, but hopefully some food for thought :slight_smile:

(attachments: two GIFs and three screenshots showing the setups described above)


Yes, when we made this graph (I think a few months ago) we didn’t know this was possible. But you might see the actual problem here: there must be some place where we store data like angle or lifetime. It can be stored either in the graph or in a C# script, but because events are limited to common attributes, it made more sense to keep the whole logic inside the graph.

This is a nice trick! Kind of dumb that I didn’t think of it - previously I would use set size or something.
I need to update koirat’s thread.

If you need it to be asset-driven and easily editable, you can maybe store it in a scriptable object:

using UnityEngine;

[CreateAssetMenu(fileName = "Data", menuName = "ScriptableObjects/SwordData", order = 1)]
public class SwordData : ScriptableObject
{
    public string swordName;
    public Vector3 position;
    public Vector3 color;
}

This will allow you to have data assets in your project folder which are easy to add and tweak directly in the inspector. And then in your C#:

public class VFXEventAttributeExample : MonoBehaviour
{
    VisualEffect visualEffect;
    VFXEventAttribute eventAttribute;
    static readonly int eventID = Shader.PropertyToID("SwordAttack");
    static readonly int positionID = Shader.PropertyToID("position");
    static readonly int colorID = Shader.PropertyToID("color");
    public SwordData[] allSwordData; // <<<<< Link the scriptable objects here

....

    void StartTrail()
    {
        // Set attributes for each weapon
        switch (weaponID)
        {
            case 0:
                eventAttribute.SetVector3(positionID, allSwordData[0].position); // <<<<< And use their data here
                eventAttribute.SetVector3(colorID, allSwordData[0].color);
                break;
            case 1:
                eventAttribute.SetVector3(positionID, allSwordData[1].position);
                eventAttribute.SetVector3(colorID, allSwordData[1].color);
                break;
        }
 
        // Sends the event with the attributes over to be inherited
        visualEffect.SendEvent(eventID, eventAttribute);
    }
}

I think you should be able to send custom attributes as well. The UX is a bit unpolished, but I think you can do something like this:

and then inherit the value:

And then just use it whenever in your graph:

*note that the location needs to be “current” (“source” means inherited from something - a Spawner, GPU Event, etc.; “current” is just “whatever the value is now”)

Then you only need to send this custom attribute over with the rest of your attributes:

            case 0:
                eventAttribute.SetVector3(positionID, allSwordData[0].position); // <<<<< And use their data here
                eventAttribute.SetVector3(colorID, allSwordData[0].color);
                eventAttribute.SetVector3("CustomSwordAttribute", allSwordData[0].someCustomValue); // << You can use a string, or do Shader.PropertyToID like with the other attributes and use an int for the attribute name
                break;

Even if that doesn’t work, you can just send the data in an attribute you are not otherwise using, like texIndex, velocity, targetPosition, etc.

(attachments: three screenshots of the custom attribute setup in the graph)

I have never used custom attributes, but I think they cannot solve this problem. Sure, I can send very basic things like direction inside the event, but other things are too specific, I guess.

This is true, I can store some values in scriptable objects, but the question is why it would be better that way.
I feel obliged now to explain the approach I have in mind :smile:

There are two swords with 3 animations (attacks) in a combo. I would make two separate graph assets like this
(these are not real graphs, I made them just now for the sake of the example):
Swords
[spoiler]
(attachments: Sword1.png, Sword2.png)
[/spoiler]

They are made from subgraphs, each representing a single VFX like a slash or a stab, and there can be different visual variants of them, like in this example: Simple and Amazing slash. (Let’s assume the first one is very basic and the other has sparkles and a distortion effect. Btw, they are not trails.) Also, there could be a single event plus some attack index used in a switch to change the emission of each system, but that does not matter now.

I could make a scriptable object and store Size or Tilt there, because they would be common to all graphs, but the other params are very unique, and I would need an SO with literally all possible combinations to cover them; furthermore, I would need to expose all of them as properties in the graph.
So I think it is better to create a graph, set all the desired params in the subgraphs, and send via event only some very common properties like character rotation. For sure this will not produce the best possible performance, but it will be maintainable.
Unless I missed something obvious, I don’t know a better way to do this.

Thanks for the example, @Qriva !
I see, I guess if there’s not much overlap between the different effects, it would be hard to identify and reuse any modular elements (unless you can group them into a few categories; for example, OnAttack1 and OnAttack3 in your example can reuse SimpleSlash with the same properties). But yes, I don’t think you are missing anything obvious, and using the VFX as subgraphs seems like a good workflow.

The only other question would be how many of the 60+ effects will be playing at the same time and whether the performance gain of having fewer Visual Effect Components will outweigh the complications of setting them all in the same graph as subgraphs.

There will be a few in the scene, but I think only one will be playing at a time. These swords are mostly for the player, and luckily there is only one player :slight_smile:

This is an interesting discussion to follow. It seems like the basic problem is how to give a player numerous weapon types, each with a unique VFX. In Shuriken, this would be trivial: just make a new Shuriken system for each effect. I definitely sense the overhead of a VFX Graph just from using it in the editor, so I would be wary of including dozens of graphs in my project.

Looking at your graphs, my thought would be to design a single graph that is customizable into each effect, and use Scriptable Objects, which you can name, holding the unique settings required. I think it’s possible to simplify the variables based on the examples you posted. For example, a gradient instead of 2 colors: just sample parts of the gradient as needed, and if it’s one color, make it a one-color gradient. Maybe the same with textures: use a blank texture for an effect that doesn’t need one.
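As a rough illustration of that suggestion (just a sketch; the class and field names are made up), the per-weapon settings could live in a ScriptableObject where a one-color effect uses a single-key gradient and texture-less effects reference a blank texture:

```csharp
using UnityEngine;

// Hypothetical data asset for the "one customizable graph" approach.
[CreateAssetMenu(fileName = "SlashVFXData", menuName = "ScriptableObjects/SlashVFXData")]
public class SlashVFXData : ScriptableObject
{
    // One gradient covers both one-color and two-color effects:
    // a single-key gradient evaluates to a flat color everywhere.
    public Gradient colorGradient;

    // Effects that don't need a texture can point at a blank (white) texture.
    public Texture2D detailTexture;

    public float noiseScale = 1f;
}
```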

I mean, it would definitely be possible to build an SO system for the properties, but I can see potential for human mistakes and bugs, and in this setup it would not be nice to use, I guess.

I need to test something first, but if the only overhead is caused by the lack of “batching” between graphs, then it might actually make sense to split the graphs into separate Visual Effect components in my case.

Jumping onboard: I’m trying to batch a bunch of smokes to drop the VFX Graph overhead, but I can’t get more than one event at a time in 2020.1.17 / URP 8.31.
here is my setup

using System;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class VFXEventEmitter : MonoBehaviour
{
    public int groupID;
    public float size = .5f, lifetime = .5f;
    public static Dictionary<int, List<VFXEventEmitter>> emitters = new Dictionary<int, List<VFXEventEmitter>>();

    void OnEnable()
    {
        if (emitters.ContainsKey(groupID)==false)
            emitters.Add(groupID, new List<VFXEventEmitter>());
        emitters[groupID].Add(this);
    }

    void OnDisable() { emitters[groupID].Remove(this); }
}

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.VFX;
public class VFXEventController : MonoBehaviour
{
    VisualEffect visualEffect;
    VFXEventAttribute eventAttribute;
    public int groupID;
    static readonly int eventID = Shader.PropertyToID("smoke");
    static readonly int positionID = Shader.PropertyToID("position");
    static readonly int lifetime = Shader.PropertyToID("lifetime");

    void Start()
    {
        visualEffect = GetComponent<VisualEffect>();   
        eventAttribute = visualEffect.CreateVFXEventAttribute();
    }
    void Update()
    {
        foreach (var e in VFXEventEmitter.emitters[groupID])
        {
            eventAttribute.SetVector3(positionID, e.transform.position);
            eventAttribute.SetFloat(lifetime, e.lifetime);
            visualEffect.SendEvent(eventID, eventAttribute);
        }
    }
}

result = only one vfx event emitter emits, always the last one in the event list
expected = all vfx event emitters emit

This is not a bug, it’s a feature :wink:

A single visual effect can process only one event call per frame; that’s one of the reasons why I made this thread.
I think someone said it will be changed in the future, but I have no clue about the current state and whether there is still a plan to change this behaviour.


Vlad said 2021 got that fixed.
Did you end up staggering the particle emission? Not great when the game drops below 60 Hz and suddenly a bunch of emitters stop emitting.


Oh, wait. It’s fixed in 2021? Do I understand correctly? Link or it didn’t happen :hushed:
Anyway, I am not sure I understand what you mean. Btw, why don’t you upgrade to 2020.3?

It will be for 2021.2, the PR is ready and awaiting to be merged.

Here’s the PR in question (I think this repo is public and you should be able to view it, but I might be wrong, in which case you’ll just have to trust me :smile:). To avoid confusion, again this is slated to go in 2021.2.


One frame spawns the particles of one emitter, the next frame those of another one, etc…
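That staggering could look something like this, reusing the emitter registry from the code above (untested sketch): instead of sending every emitter's event in the same frame, keep a cursor and service one emitter per frame in round-robin order.

```csharp
using UnityEngine;
using UnityEngine.VFX;

// Hypothetical round-robin variant of the VFXEventController above:
// only one emitter's event is sent per frame, cycling through the list,
// since a single visual effect processes one event call per frame.
public class StaggeredVFXEventController : MonoBehaviour
{
    VisualEffect visualEffect;
    VFXEventAttribute eventAttribute;
    public int groupID;
    int cursor; // which emitter gets serviced this frame

    static readonly int eventID = Shader.PropertyToID("smoke");
    static readonly int positionID = Shader.PropertyToID("position");
    static readonly int lifetimeID = Shader.PropertyToID("lifetime");

    void Start()
    {
        visualEffect = GetComponent<VisualEffect>();
        eventAttribute = visualEffect.CreateVFXEventAttribute();
    }

    void Update()
    {
        var list = VFXEventEmitter.emitters[groupID];
        if (list.Count == 0) return;

        cursor %= list.Count; // stay in range if emitters were removed
        var e = list[cursor];
        eventAttribute.SetVector3(positionID, e.transform.position);
        eventAttribute.SetFloat(lifetimeID, e.lifetime);
        visualEffect.SendEvent(eventID, eventAttribute);
        cursor++;
    }
}
```

The downside is exactly what was mentioned: N emitters need N frames for a full cycle, so at low frame rates a large group will visibly lag behind.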

because of this https://discussions.unity.com/t/832862