Need help optimizing a basic crowd movement simulation

I’m trying to simulate basic crowd movement and could use some help with optimization.

I have several game objects making up a crowd and I want them to simply move up or down randomly.

I don’t want to use animations, as they cost too much in terms of performance. Since it’s a simple movement, doing it via code seems ideal.

I’m using the following code to try to achieve this movement.

    private float yPos;
    private float speed = 1f;

    private bool goingUp = false;
    private bool startMovement = false;

    void Start()
    {
        yPos = transform.position.y; // baseline in the same (world) space that Update() compares against

        StartCoroutine(StartingMovement());
    }

    IEnumerator StartingMovement()
    {
        yield return new WaitForSeconds(Random.Range(0.5f, 4));

        startMovement = true;
    }

    private void Update()
    {
        if (startMovement)
        {
            if (goingUp)
            {
                transform.position += Vector3.up * speed * Time.deltaTime;

                if (transform.position.y > yPos + 0.4f)
                {
                    goingUp = false;
                }
            }
            else
            {
                transform.position += Vector3.down * speed * Time.deltaTime;

                if (transform.position.y < yPos)
                {
                    goingUp = true;
                }
            }
        }
    }

This does work, in the sense that each object moves up and down randomly. However, it doesn’t seem very optimized, as this script has to be attached to every object in order to do this. As a result, I get significant delays when switching scenes.

Is there a more optimal/better way of going about this, where I can get multiple child objects to move randomly?

I did think of just using a script on the parent and accessing the child objects, but I can’t seem to make them move individually.

I appreciate that this may not be the best way to go about it so any input would be helpful.

Test project: Crowd Test (1).zip - Google Drive

Animations would cost too much performance? Which tests did you do to conclude that? What platform are you developing for - a GameBoy Color? Or how many entities are we talking about in these groups?

You could indeed gather your grouped entities and parent them to the same object, then attach your script to that same object. I’m confused as to why you say you want to make the entities move individually, though… isn’t the whole point of your groups that they should move up or down together? If you want individual movement, you need individual scripts (or technically you could do it in one script, but you’d still need to handle each entity in a similar fashion).
If you want some enemies to move as groups, you could parent just those to the corresponding parent. You could also just have an empty parent object with that script attached per movement direction (out of sync) and attach the desired entities to it dynamically. It is hard to tell you what to do, since we have so little information about what you are actually doing. This could be anything from group behavior for enemies, to idle animations, to background animations… and speaking of animations, I’m curious as to how you concluded the whole performance-issue part.


For changing many transforms you can use the job system. It has a job type specifically for transforms (IJobParallelForTransform).
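For reference, a rough sketch of what that might look like with IJobParallelForTransform from the Jobs system (the field names and motion math here are hypothetical, and the `baseY`/`seed` arrays are assumed to be filled elsewhere):

    // Hypothetical sketch: one job bobs every crowd transform in parallel.
    using Unity.Collections;
    using UnityEngine;
    using UnityEngine.Jobs;

    struct CrowdBobJob : IJobParallelForTransform
    {
        public float time;
        [ReadOnly] public NativeArray<float> baseY; // resting heights
        [ReadOnly] public NativeArray<float> seed;  // per-member phase offsets

        public void Execute(int index, TransformAccess t)
        {
            Vector3 p = t.localPosition;
            // abs(sin) keeps each member at or above its resting height
            p.y = baseY[index] + Mathf.Abs(Mathf.Sin(time + seed[index])) * 0.2f;
            t.localPosition = p;
        }
    }

    // Scheduled from a single manager, e.g. in Update():
    // var handle = new CrowdBobJob { time = Time.time, baseY = baseY, seed = seed }
    //     .Schedule(transformAccessArray); // TransformAccessArray built once from the crowd
    // handle.Complete();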


Right, so: in my project I have around 600-700 child objects. I want to replicate a crowd in a stadium, which is why I want them to move differently from each other.

The test project I’ve provided is basically what I’ve done, just with far fewer game objects but the same setup. Not sure what other information you require, as I mentioned in the initial post that it’s several game objects that each have a script attached with the code provided. Like I said, I have managed to get this working, but there is a slowdown when changing scenes. I was asking whether there is a more optimized way of going about this.

In terms of your fixation on animation performance: I concluded that there is a performance drop because my project runs at a lower framerate when using animations compared to using scripting. I attached all game objects to a parent and added an Animator component to the parent to animate them. I’ve tested this both in the editor and in a build on an Android device.

Honestly for such a simple use case (moving an object up/down randomly), it’s better to write a vertex shader.

But if you really want to stick with this approach, you can make a simple manager that governs the crowd and decides when and which instances should change state. That way you have perfect control, and you also have just one MonoBehaviour instead of 700 of them. Because that’s your problem: you’re wasting too much time doing nothing, and wasting the underlying system’s resources by requiring 700 objects to be considered and called constantly.

Instead you just build a timing machine that decides who is supposed to stand up / sit down at some randomly decided intervals.

You can attach the crowd to this object as children, so it has an easy time deciding which objects are the target ones. And all it does is to re-enable the already existing animation scripts which then auto-disable themselves. That way you ensure that you have a dozen or so active MonoBehaviours at most.
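A minimal sketch of what such a manager might look like (names are hypothetical, and it assumes the crowd members are direct children; this variant moves everything from one Update rather than toggling per-member scripts):

    // Hypothetical sketch: one MonoBehaviour drives every crowd member.
    using UnityEngine;

    public class CrowdManager : MonoBehaviour
    {
        Transform[] members;
        float[] baseY, phase, speed;

        void Start()
        {
            int n = transform.childCount;
            members = new Transform[n];
            baseY = new float[n];
            phase = new float[n];
            speed = new float[n];

            for (int i = 0; i < n; i++)
            {
                members[i] = transform.GetChild(i);
                baseY[i] = members[i].localPosition.y;
                phase[i] = Random.Range(0f, Mathf.PI * 2f); // desynchronizes members
                speed[i] = Random.Range(0.8f, 1.2f);
            }
        }

        void Update()
        {
            for (int i = 0; i < members.Length; i++)
            {
                Vector3 p = members[i].localPosition;
                // abs(sin) keeps each member at or above its resting height
                p.y = baseY[i] + Mathf.Abs(Mathf.Sin(Time.time * speed[i] + phase[i])) * 0.2f;
                members[i].localPosition = p;
            }
        }
    }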


Update: Definitely pay attention to @Yoreki before jumping neck deep into acting on what I’ve written below. Yes, it could be faster, but does it need to be? If the Profiler shows you that you need to optimise this then the below should help you pick a direction. If you don’t need to then just tuck it up your sleeve for later, or for when you want to do some exercises to learn about this stuff.

Original content:

Yep, or any other approach which doesn’t involve doing things per object.

The current approach has overheads coming from a few directions, any one of which could be of higher impact than the actual work you want done, @gw707 , which is just changing a few numbers.

  • Every crowd member is a GameObject with a Transform and that behaviour attached. So to do anything with it via the current approach, at least three different objects need to be accessed. Accessing objects is usually negligible enough to ignore, and usually I would. The catch is that in this case it’s being done for 700 similar objects every frame for a very simple effect.

  • On top of the above, every time a Transform is modified, Unity has to do a bunch of stuff with parent and child Transforms.

  • Also of note, accessing transform.position requires calculating a bunch of stuff which is dependent on different objects. Here you’re reading it (which internally requires a calculation to world space, which requires accessing parent Transforms), calculating a new value, writing that value to the transform (which internally has to convert it back from world space), then reading it back in a later statement. Calculate the new value locally, then re-use that!
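The read-once/write-once pattern from the last bullet, sketched against the original script (same behaviour, but only one position read and one write per frame):

    // Cache the value locally instead of reading transform.position repeatedly.
    Vector3 p = transform.position;               // one read (one world-space conversion)
    p.y += (goingUp ? speed : -speed) * Time.deltaTime;
    goingUp = goingUp ? p.y <= yPos + 0.4f : p.y < yPos;
    transform.position = p;                       // one write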

None of the above is having a whinge. None of that stuff is at all obvious until you’re pretty well versed in how game engines work. But hopefully now that you’re aware you can think about how to work around them.

The vertex shader approach is a good one. The GPU has to execute a vertex shader anyway, so putting a few instructions in there can essentially make it free. Even if it’s not free, it will be trivially cheap compared to modifying hundreds of GOs.

Another approach potentially worth considering is using a particle system. They can spawn and render meshes, without those meshes being GameObjects, and with internal data structures which are far more efficient for handling a large set of similar objects.

Another potential option is DrawMeshInstanced and/or DrawMeshInstancedIndirect, for similar reasons.
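A hedged sketch of the DrawMeshInstanced route (`crowdMesh`, `crowdMaterial`, and `seatPositions` are hypothetical; the material needs “Enable GPU Instancing” ticked, and the call must be issued every frame, e.g. from Update):

    // Draw the whole crowd with no GameObjects; one call covers up to 1023 instances.
    Matrix4x4[] matrices = new Matrix4x4[700];
    for (int i = 0; i < matrices.Length; i++)
        matrices[i] = Matrix4x4.TRS(seatPositions[i], Quaternion.identity, Vector3.one);
    Graphics.DrawMeshInstanced(crowdMesh, 0, crowdMaterial, matrices, matrices.Length);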

Regardless of approach, another thing I’d avoid is the “goingUp” boolean and branch. My logic for getting the up/down movement would probably use something like sin(time). This way you don’t need to track whether or not each crowd member is going up or down at the moment and execute different logic, because sin(…) naturally gives you a number which goes up and down. This reduces the complexity of your code, cuts out a branch (probably not a big deal in this case), and cuts out having to write back to the state value (which will matter more or less based on which approach you take, e.g. it would be a giant pain in a shader). You can add to or multiply either the time value or the output from the sin function, or both, and check out other math functions to try out to tweak the movement as desired.
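Applied to the original script, that idea might look something like this (`phase` is a hypothetical per-member random value so members don’t bob in sync):

    // Stateless up/down: remap sin's -1..1 output to a 0..0.4 band above the base height.
    float wave = Mathf.Sin(Time.time * speed + phase) * 0.5f + 0.5f; // 0..1
    Vector3 p = transform.position;
    p.y = yPos + wave * 0.4f;
    transform.position = p;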


I agree with the above posts. If it’s a visual change and performance is of importance, then a shader is usually the optimal solution. If you need the actual entities to move, the job system can easily handle that many. Neither of these topics is especially beginner friendly, however, and unless absolutely necessary I would suggest other solutions. Which is why I was asking about the circumstances, i.e. how many are we talking about, what is the target platform, and so on. I’ve got to admit, I totally missed the project link. However, I would much prefer a simple sentence like “I have xyz many objects that represent a stadium crowd for which I want to implement crowd movement”, compared to having to download and load an entire project just to get that information. I don’t even have a current Unity version set up on this computer :wink:

Other than that @orionsyndrome and @angrypenguin mentioned a lot of good approaches.
Just make sure that you actually use the Profiler to determine a performance problem. Don’t just optimize “because”. That’s a typical beginner mistake and will usually lead to a lot of effort being put into something that doesn’t necessarily improve the actual build performance. I’m just mentioning this since you wrote you had a “lower” framerate with animations. That doesn’t really tell us a lot. There is a difference between a problematically low framerate and a “lower” framerate. If one ran at 200 FPS and the other at 190 FPS, for example, that wouldn’t be a reason to decide for one or the other. Probably not what you meant, but still: check the Profiler to determine actual problems before changing your approach for performance reasons. Maintainability and readability of your project are usually a lot more important than small differences in performance. (Not implying what you measured was a small difference, just general-purpose advice.)


Thank you all for the suggestions. I’m still a relative beginner, so these responses have been really helpful.

The vertex shader approach is an interesting one, as I was unaware that you can apply movement via a shader. I have managed to implement this after researching online, albeit they’re all moving exactly the same right now. I’m sure with more research and tweaks I can get it working the way I want.

Thanks again for the suggestions.


You want to make a shader that has some sort of a floating point “seed” as a parameter, as well as “time” value. (You can optionally add “multiplier” as well so you can decide to mute or amplify the motion from the outside.)

Learn online how to produce a quasi-pseudo-random value from the seed (by jumbling sine and cosine functions with arbitrarily large values). Internally, use this noise value and the time to determine the behavior.

Now, to distribute this over many instances, you can either use material property blocks in the built-in renderer, or, if you’re on URP, check out this thread. Basically, SRPs batch per material; alternatively you can use GPU instancing as well.

Make sure you don’t have as many materials as there are objects. You can create far fewer materials (from code) and assign them randomly; if you maintain a healthy mixing ratio, nobody will notice.
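As a sketch, handing each renderer its own seed without duplicating materials might look like this (“_Seed” stands in for whatever float property your shader actually exposes):

    // Hypothetical: give every renderer a random _Seed via a property block.
    var block = new MaterialPropertyBlock();
    foreach (Renderer r in GetComponentsInChildren<Renderer>())
    {
        block.SetFloat("_Seed", Random.Range(0f, 100f));
        r.SetPropertyBlock(block);
    }

Note that per-renderer property blocks can interfere with SRP batching, which is one reason the fewer-materials mixing approach above is also worth considering.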

Regarding the “time” property, maybe there is something that comes with the shader already; I can’t remember. Edit: yes, ShaderGraph has a Time node.


So I’ve managed to get this movement working through a vertex shader and material property block, and it works great, so thanks again for that suggestion.

I have, however, run into an issue with this movement and haven’t been able to find anything to solve this.

So, in order to create the up/down movement, I’m using a sine wave in my shader.

void vert(inout appdata_full v)
{
    // _Time is a float4 (t/20, t, 2t, 3t), so use _Time.y for time in seconds
    v.vertex.y += sin(_Time.y * 60) * 0.13;
}

(This is just example code; my actual code uses random values rather than set ones to simulate randomness.)

While this movement works, when the object goes down it goes through the floor, which I don’t want. I want the bottom of the sine wave to be at the ground, rather than below it.

  • The left image is how the movement is, and the right is how I want it.

While I can just adjust the Y position of the object and set it high enough so that the bottom point is at the ground, this isn’t ideal: if the objects were to stop moving, they would be in mid-air rather than on the ground. (Hope that makes sense.)

I know why it’s doing this: the sine wave starts from 0 (the initial position of the object), which is the middle of the wave.

What I can’t figure out is how to get the movement like the picture on the right without adjusting the object’s initial Y position.

Is there something I can add to the current code, or something other than a sine wave I can use, to achieve this?

Well you ought to know the delta between the ground and the object’s original pivot point (before any motion).


On the left is what you have: from some pivot point you compute a sine wave that is magnified by some amplitude (A), then apply this as an offset. So you’re probably doing this

pivot = amp * sin(angle);

Instead, introduce some pivot offset (an amount shown as X, which should normally be a negative value); we can use this to push the pivot down by that amount. Take the absolute value of the sine and magnify it by A-X (don’t forget that A is positive and X is negative, so if A is 8 and X is -3, this comes out to +11).

pivot = (amp - offset) * abs(sin(angle)) + offset;

Also by taking the absolute value you produce a classic jumping motion, instead of waving up and down.

If you want the waving behavior, however, then you need to push pivot up by a half-amount of offset (and your amplitude is half that of A-X).

pivot = ((amp - offset) * -cos(2.0 * angle) + (amp + offset)) / 2.0;

In this case, it is better to replace sin with -cos, to make the pivot start from the ground.

Finally, to make your shader come to rest, introduce an attenuation parameter (you can call it ‘Strength’). This parameter is in the 0…1 range and you can use it to lerp the output amplitude. Simply add the following line right below.

pivot = lerp(offset, pivot, att); // or you can inject the above formula instead of pivot here

If your object’s pivot is already on the ground, then X is zero (and you don’t need the offset parameter) and formulas simplify to

pivot = amp * abs(sin(angle));

or

pivot = (amp * -cos(2.0 * angle)) / 2.0;

Edits:
I forgot to offset the whole thing for the first case
I’ve fixed -cos version and verified this with desmos (<< link should work)
Here you can also see what happens with the lerp added (change t parameter)
Doubled frequency of cos to synchronize it with the jumping one (not shown in desmos)
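For anyone wanting to sanity-check the formulas above outside a shader, here they are as small C# helpers (the names are mine; `amp` is the positive amplitude A, `offset` is the negative pivot-to-ground delta X):

    // Jumping motion: rests at `offset` (the ground), peaks at `amp`.
    float Jump(float angle, float amp, float offset) =>
        (amp - offset) * Mathf.Abs(Mathf.Sin(angle)) + offset;

    // Waving motion, synchronized with Jump: also starts at `offset`, peaks at `amp`.
    float Wave(float angle, float amp, float offset) =>
        ((amp - offset) * -Mathf.Cos(2f * angle) + (amp + offset)) / 2f;

    // Attenuated version: att = 0 rests on the ground, att = 1 is full motion.
    float Rest(float angle, float amp, float offset, float att) =>
        Mathf.Lerp(offset, Jump(angle, amp, offset), att);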


Ah cheers, thanks for the info.

I’ve managed to use this to get the movement I was looking for.

Thanks again for your help, much appreciated.


How far away is this crowd? I’ve seen a few games where the crowd was just a repeating texture with a few frames of animation (maybe 4). That’s because the crowd was so far in the background and never the focus of the scene.

I had considered this, but decided this option wouldn’t be viable, as my crowds are initially fairly close to the player and I have a camera attached to the player, so you can get closer to the crowds by moving towards them.

There are a couple more techniques you could also try out:
a) you can LOD the models themselves, letting you zoom in on the crowd with automatic scaling of the triangle budget
b) you can introduce submeshes or vertex-painted groups to supply individual (rigid) animation frames within a single mesh if needed (you basically selectively filter the unwanted overlapping meshes out of the render), and this can work miraculously when combined with the simple motion you already have

Both techniques are supported by shaders.
There is also a way to combine (a), (b), and the vertex shader motion through vertex-color-driven animation (or even a texture), but that’s probably overkill for your case. More info here and here.
