Hi guys, today I finished my sprite sheet system using a custom shader, compute buffers and DrawMeshInstancedIndirect to quickly render a lot of objects with a single draw call.
I would like to know which approach you are using, to share some knowledge about this topic, since there is no “official” implementation of a sprite sheet system with ECS yet.
I’ll post my source code later this afternoon
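Until then, here is a minimal sketch of the general DrawMeshInstancedIndirect + ComputeBuffer setup (the names `InstanceData`, `_InstanceBuffer`, etc. are illustrative, not from my actual source):

```csharp
using UnityEngine;

// Sketch: render `count` quads in one draw call via DrawMeshInstancedIndirect.
// All names here are illustrative placeholders.
public class IndirectSpriteRenderer : MonoBehaviour
{
    public Mesh quadMesh;        // a simple quad mesh
    public Material material;    // custom shader that reads _InstanceBuffer
    public int count = 200000;

    // one float4 per instance: position.xy, rotation, scale
    ComputeBuffer instanceBuffer;
    ComputeBuffer argsBuffer;

    void Start()
    {
        instanceBuffer = new ComputeBuffer(count, sizeof(float) * 4);
        // ...fill instanceBuffer with per-sprite data here...
        material.SetBuffer("_InstanceBuffer", instanceBuffer);

        // indirect args: index count, instance count, start index, base vertex, start instance
        uint[] args = { quadMesh.GetIndexCount(0), (uint)count, 0, 0, 0 };
        argsBuffer = new ComputeBuffer(1, args.Length * sizeof(uint),
                                       ComputeBufferType.IndirectArguments);
        argsBuffer.SetData(args);
    }

    void Update()
    {
        // bounds just need to contain everything; culling is handled separately
        var bounds = new Bounds(Vector3.zero, Vector3.one * 1000f);
        Graphics.DrawMeshInstancedIndirect(quadMesh, 0, material, bounds, argsBuffer);
    }

    void OnDestroy()
    {
        instanceBuffer?.Release();
        argsBuffer?.Release();
    }
}
```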
Looks good. What is missing is integrating into the conversion pipeline.
Check out MeshRendererConversion.cs for an example. This way you could just have SpriteRenderers as GameObjects for editing, and everything gets converted to an optimised baked format for loading in the game.
Looks pretty nice.
I wonder if it is possible to use a similar approach to the one taken by SpriteRenderer where with a given atlas it can (apparently) render multiple meshes with just one draw call.
Not yet, but you can achieve the same result using a different approach:
Given this sprite atlas, for example, you can set the SpriteSheet component as follows:
new SpriteSheet { spriteIndex = 1, cell = new int2(9, 5) }
and just by changing the spriteIndex you can select any sprite you want.
The limitation of this system is that all sprites need to be the same size, so that the texture can be divided into equally sized cells.
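For reference, the tiling/offset math for such an equally-sized grid is straightforward. A sketch, assuming `cell` holds columns × rows and sprites are laid out left-to-right, top-to-bottom:

```csharp
using UnityEngine;

// Sketch: compute UV tiling and offset for sprite `index` in a cols × rows grid.
// Assumes row 0 is at the top of the texture; adjust if your atlas starts at the bottom.
static Vector4 TilingAndOffset(int index, int cols, int rows)
{
    float tileX = 1f / cols;
    float tileY = 1f / rows;
    int col = index % cols;
    int row = index / cols;
    // Texture V runs bottom-to-top, so flip the row.
    float offsetX = col * tileX;
    float offsetY = 1f - (row + 1) * tileY;
    return new Vector4(tileX, tileY, offsetX, offsetY); // xy = tiling, zw = offset
}
```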
I’ll be soon implementing a system that supports actual atlases where sprites don’t always have the same size.
I’m taking a similar approach, but using DrawMeshInstanced with MaterialPropertyBlocks and a custom shader with instanced properties (e.g. color, tile, offset), achieving the same result as your DrawMeshInstancedIndirect/ComputeBuffer approach.
The performance is not as good as using DrawMeshInstancedIndirect because of the imposed 1023 instance limit on DrawMeshInstanced.
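For completeness, the 1023 limit means you have to batch manually, roughly like this (an illustrative sketch, not my exact code):

```csharp
using UnityEngine;

// Sketch: DrawMeshInstanced caps at 1023 instances per call, so split into batches.
public class BatchedInstancedRenderer
{
    const int BatchSize = 1023;
    readonly Matrix4x4[] batch = new Matrix4x4[BatchSize];

    public void Render(Mesh mesh, Material material, Matrix4x4[] matrices,
                       MaterialPropertyBlock props)
    {
        for (int start = 0; start < matrices.Length; start += BatchSize)
        {
            int count = Mathf.Min(BatchSize, matrices.Length - start);
            System.Array.Copy(matrices, start, batch, 0, count);
            // props carries the per-instance properties (color, tile, offset)
            Graphics.DrawMeshInstanced(mesh, 0, material, batch, count, props);
        }
    }
}
```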
That is pretty straightforward. Have a look at MeshRendererConversion inside the Unity.Rendering.Hybrid package, as it should be similar in some ways.
Yeah, I tried almost every approach possible… the only one I didn’t try was geometry shaders.
And out of anything I tried, DrawMeshInstancedIndirect was the fastest.
You could probably speed it up a bit more by moving the culling to a compute shader.
Add all the instances to a compute buffer as you do. Then use a compute shader to frustum cull and add the visible instances to an append buffer you render with DrawMeshInstancedIndirect.
This way it would not need any data copied from CPU to GPU every frame.
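The C# side of that idea could look roughly like this (a sketch; the compute shader itself is not shown, and is assumed to use `numthreads(64,1,1)` and append visible instances to an AppendStructuredBuffer — all names are illustrative):

```csharp
using UnityEngine;

// Sketch: GPU frustum culling into an append buffer, then an indirect draw.
static void RenderCulled(ComputeShader cullShader, int kernel, Mesh mesh,
                         Material material, ComputeBuffer allInstances, int total)
{
    // AppendStructuredBuffer the compute shader writes visible instances into
    var visible = new ComputeBuffer(total, sizeof(float) * 4, ComputeBufferType.Append);
    visible.SetCounterValue(0);

    cullShader.SetBuffer(kernel, "_AllInstances", allInstances);
    cullShader.SetBuffer(kernel, "_VisibleInstances", visible);
    cullShader.Dispatch(kernel, Mathf.CeilToInt(total / 64f), 1, 1);

    // indirect args; the instance count (second slot) is filled on the GPU
    uint[] args = { mesh.GetIndexCount(0), 0, 0, 0, 0 };
    var argsBuffer = new ComputeBuffer(1, args.Length * sizeof(uint),
                                       ComputeBufferType.IndirectArguments);
    argsBuffer.SetData(args);
    // copy the append buffer's hidden counter into byte offset 4 (instance count)
    ComputeBuffer.CopyCount(visible, argsBuffer, sizeof(uint));

    material.SetBuffer("_VisibleInstances", visible);
    Graphics.DrawMeshInstancedIndirect(mesh, 0, material,
        new Bounds(Vector3.zero, Vector3.one * 1000f), argsBuffer);
    // (in real code, reuse and release the buffers instead of allocating per call)
}
```

The nice part is that the instance count never touches the CPU: `ComputeBuffer.CopyCount` moves the append buffer’s counter straight into the args buffer on the GPU.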
You are right. I already knew my culling system was not perfect, but it’s not even taking 1% of the frame time to run, so I didn’t want to complicate things right at the beginning.
I’ll surely revisit this part in a future version.
I already wrote a sorting effect based on the lowest Y; it’s going to be available in the next version (rotations are also supported).
To supply additional data to the shader, you need to create a ComputeBuffer just like I do, and then bind it to the corresponding buffer variable inside the shader.
Another suggestion would be to use Unity builtin SpriteAtlas instead of your own.
In my renderer I do it by forcing the atlas to use FullRect sprites; then via an instanced property/ComputeBuffer you can set the offset/tiling and scale. Works just fine.
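With FullRect packing, the per-sprite tiling/offset can be derived from the sprite’s rect inside the packed texture. A sketch of that idea (the helper name is mine, not from the poster’s renderer):

```csharp
using UnityEngine;

// Sketch: derive tiling/offset for a sprite packed into an atlas (FullRect mode),
// so a shader can sample the packed atlas texture directly.
static Vector4 AtlasTilingAndOffset(Sprite sprite)
{
    Rect r = sprite.textureRect;      // the sprite's pixel rect inside the atlas
    Texture2D tex = sprite.texture;   // the packed atlas texture
    return new Vector4(
        r.width / tex.width,          // tiling x
        r.height / tex.height,        // tiling y
        r.x / tex.width,              // offset x
        r.y / tex.height);            // offset y
}
```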
Little update: pushed a new version with z-sorting and rotation support!
I am currently working on a system to calculate tiling and offset without requiring equally sized sprites.
Another update: I rewrote the UV system, and now it’s way easier to configure a sprite sheet.
All you need is a material with a texture; if that texture is a sprite with SpriteMode: Multiple, the system will automatically bake the UVs at startup and access them through a dynamic buffer inside the RenderDataSystem.
EntityManager entityManager = World.Active.EntityManager;
spriteSheetArchetype = entityManager.CreateArchetype(
    typeof(Position2D),
    typeof(Rotation2D),
    typeof(Scale),
    typeof(Bound2D),
    typeof(SpriteSheet),
    typeof(SpriteSheetAnimation),
    typeof(SpriteSheetMaterial),
    typeof(UvBuffer)
);

NativeArray<Entity> entities = new NativeArray<Entity>(200000, Allocator.Temp);
entityManager.CreateEntity(spriteSheetArchetype, entities);

float2[] cameraBound = Bound2DExtension.BoundValuesFromCamera(Camera.main);
float4[] uvs = SpriteSheetCache.BakeUv(material);

for (int i = 0; i < entities.Length; i++) {
    // spawn each entity at a random position inside the camera bounds
    float2 position = cameraBound[0] + new float2(
        UnityEngine.Random.Range(-cameraBound[1].x / 2, cameraBound[1].x / 2),
        UnityEngine.Random.Range(-cameraBound[1].y / 2, cameraBound[1].y / 2));
    entityManager.SetComponentData(entities[i], new Position2D { Value = position });
    entityManager.SetComponentData(entities[i], new Scale { Value = 1 });
    entityManager.SetComponentData(entities[i], new SpriteSheet { spriteIndex = UnityEngine.Random.Range(0, 16), maxSprites = uvs.Length });
    entityManager.SetComponentData(entities[i], new SpriteSheetAnimation { play = true, repetition = SpriteSheetAnimation.RepetitionType.Loop, samples = 10 });
    entityManager.SetSharedComponentData(entities[i], new SpriteSheetMaterial { material = material });

    // copy the baked UVs into the entity's dynamic buffer
    DynamicBuffer<UvBuffer> lookup = entityManager.GetBuffer<UvBuffer>(entities[i]);
    for (int j = 0; j < uvs.Length; j++)
        lookup.Add(new UvBuffer { uv = uvs[j] });
}

entities.Dispose();
Yeah, I decided to support rotation on only one axis so that everything fits inside a float4: position x, position y, rotation angle, scale.
I might change it later, I think!
Edit:
In Bound2D the scale is a float2 because I test the intersection between the camera and each entity.
Even though an entity always has the same scale on X and Y, the camera has a different scale on X and Y.
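That camera/entity test boils down to a 2D AABB intersection. A sketch, assuming Bound2D stores a center and per-axis half-extents (that layout is my assumption, not confirmed by the source):

```csharp
using Unity.Mathematics;

// Sketch: 2D AABB intersection between an entity bound and the camera bound.
// Assumes each bound is a center plus per-axis half-extents (float2).
static bool Intersects(float2 centerA, float2 halfA, float2 centerB, float2 halfB)
{
    return math.abs(centerA.x - centerB.x) <= halfA.x + halfB.x
        && math.abs(centerA.y - centerB.y) <= halfA.y + halfB.y;
}
```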