This project is a simple example of how Unity’s new Entity Component System can be used to create a performant instanced sprite renderer.
How it Works
By adding a SpriteInstanceRenderer to an entity, it is rendered as a textured quad using its Position2D and Heading2D. SpriteInstanceRenderer implements ISharedComponentData, meaning any entities using the same instance of it are drawn in one draw call. This is possible because of the Graphics.DrawMeshInstanced method. In the example scene included, 10,000 sprites are drawn. However, the aforementioned method only draws a maximum of 1023 instances at once, so the renderer splits them into as many groups as necessary to draw all the instances.
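The batching logic boils down to something like the following sketch (illustrative only, not the repo's exact code; the mesh, material, and matrices array are assumed to have been built from the entities' components, and the material must have instancing enabled):

using UnityEngine;

public static class InstancedDrawHelper
{
    const int kMaxInstancesPerBatch = 1023; // hard limit of Graphics.DrawMeshInstanced

    // Draws all instances by splitting them into groups of at most 1023
    public static void DrawBatched(Mesh mesh, Material material, Matrix4x4[] matrices)
    {
        var batch = new Matrix4x4[kMaxInstancesPerBatch];
        for (int offset = 0; offset < matrices.Length; offset += kMaxInstancesPerBatch)
        {
            int count = Mathf.Min(kMaxInstancesPerBatch, matrices.Length - offset);
            System.Array.Copy(matrices, offset, batch, 0, count);
            Graphics.DrawMeshInstanced(mesh, 0, material, batch, count);
        }
    }
}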
If you don’t need a Mesh, you can go with DrawTexture:
using System.Collections.Generic;
using Unity.Entities;
using Unity.Mathematics;
using Unity.Transforms2D;
using UnityEngine;
using UnityEngine.Experimental.PlayerLoop;

namespace Playtest.Rendering
{
    [ExecuteInEditMode]
    public class SpriteInstanceRendererSystem : ComponentSystem
    {
        List<SpriteInstanceRenderer> m_CachedUniqueRendererTypes = new List<SpriteInstanceRenderer>(10);
        ComponentGroup m_InstanceRendererGroup;

        protected override void OnCreateManager(int capacity)
        {
            m_InstanceRendererGroup = GetComponentGroup(
                ComponentType.Create<SpriteInstanceRenderer>(),
                ComponentType.Create<Position2D>());
        }

        protected override void OnUpdate()
        {
            // Rebuild the onPostRender callback chain from scratch every frame
            Camera.onPostRender = null;

            EntityManager.GetAllUniqueSharedComponentDatas(m_CachedUniqueRendererTypes);

            // Switch GL to pixel coordinates before any textures are drawn
            Camera.onPostRender += (Camera camera) =>
            {
                GL.PushMatrix();
                GL.LoadPixelMatrix(0, Screen.width, 0, Screen.height);
            };

            for (int i = 0; i != m_CachedUniqueRendererTypes.Count; i++)
            {
                var renderer = m_CachedUniqueRendererTypes[i];
                m_InstanceRendererGroup.SetFilter(renderer);
                var positions = m_InstanceRendererGroup.GetComponentDataArray<Position2D>();

                for (int j = 0; j != positions.Length; j++)
                {
                    float2 position = positions[j].Value;
                    Camera.onPostRender += (Camera camera) =>
                    {
                        // Negative height flips the texture so it isn't drawn upside down
                        Graphics.DrawTexture(
                            new Rect(position.x,
                                position.y + renderer.sprite.height,
                                renderer.sprite.width,
                                -renderer.sprite.height),
                            renderer.sprite,
                            renderer.material);
                    };
                }
            }

            Camera.onPostRender += (Camera camera) =>
            {
                GL.PopMatrix();
            };

            m_CachedUniqueRendererTypes.Clear();
        }
    }
}
using System;
using Unity.Entities;
using UnityEngine;

namespace Playtest.Rendering
{
    [Serializable]
    public struct SpriteInstanceRenderer : ISharedComponentData
    {
        public Texture2D sprite;
        public Material material;
    }

    public class SpriteInstanceRendererComponent : SharedComponentDataWrapper<SpriteInstanceRenderer> { }
}
Thank you both for sharing. These kinds of examples are always very helpful.
@Djayp would it be easy to use DrawTexture with 3D positions? Just wondering how you might calculate the scale based on distance from the camera, and whether it would be better to just use a mesh when dealing with 3D instead. I’d like to make a custom particle system using the new ECS and job system, since it should be faster and allow for more advanced features like collision detection between particles, etc.
I didn’t test it, by the way, and you MUST adjust myHorizon to your needs, using position.z as a factor.
Be aware my sample code is pretty trivial. I think we could use SetPixels to merge textures according to their z-order and materials to reduce draw calls.
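For the 3D question above, an untested sketch of the idea might look like this inside the onPostRender callback (camera, worldPos, sprite, and material are assumed; myHorizon is the hypothetical falloff factor mentioned above):

// Untested sketch: project a 3D position to screen space and shrink the rect with depth
float myHorizon = 10f; // hypothetical tuning constant; adjust to your needs
Vector3 screenPos = camera.WorldToScreenPoint(worldPos);
if (screenPos.z > 0f) // only draw positions in front of the camera
{
    float scale = myHorizon / (myHorizon + screenPos.z); // smaller with distance
    float w = sprite.width * scale;
    float h = sprite.height * scale;
    // Negative height flips the texture, as in the system above
    Graphics.DrawTexture(
        new Rect(screenPos.x - w * 0.5f, screenPos.y + h * 0.5f, w, -h),
        sprite, material);
}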
I’m a little confused by Graphics.DrawTexture now. At first I thought it must work just like a GUITexture that is drawn directly to the screen, but it’s actually rendered in world space, even though it uses screen coordinates!? So there’s no billboarding effect in 3D space. You actually need to use GUI.DrawTexture for direct screen drawing, but that’s really slow. So I don’t exactly understand the point of this? If you could actually set the 3D position and rotation it would make a lot more sense to me…
In order to do transformations with Graphics.DrawTexture, you have to use Unity’s GL functions:
public Texture2D Texture; // texture to draw (assign in the inspector)

private void Start()
{
    Camera.onPostRender += PostRender;
}

private void PostRender(Camera camera)
{
    // Pushes the current matrix onto the stack so that it can be restored later
    GL.PushMatrix();
    // Loads a new projection matrix; you can also use other methods like
    // GL.LoadOrtho() or GL.LoadPixelMatrix(). The near plane must be positive.
    GL.LoadProjectionMatrix(Matrix4x4.Perspective(90, camera.aspect, 0.1f, 10f));
    // You can also multiply the current matrix in order to do things like translation,
    // rotation and scaling. Here I'm rotating and scaling up the current matrix.
    GL.MultMatrix(Matrix4x4.TRS(Vector3.zero, Quaternion.Euler(0, 0, 45), new Vector3(2, 2, 1)));
    // Draws your texture onto the screen using the matrix you just loaded in
    Graphics.DrawTexture(new Rect(0, 0, 1, 1), Texture);
    // Pops the matrix that was just loaded, restoring the old matrix
    GL.PopMatrix();
}
Those aren’t almost world space coordinates, they are world space coordinates: (0,0) is (0,0,0) and (1,1) is (1,1,0).
OnPostRender is meant for manually drawing arbitrary things; it isn’t specifically meant to render in screen space. You have to set up a matrix manually, something like this:
LoadPixelMatrix takes four parameters (left, right, bottom, top), which let you specify any orthographic mapping you want:
GL.LoadPixelMatrix(0, 1, 0, 1); // would equal GL.LoadOrtho();
GL.LoadPixelMatrix(0, 1, 1, 0); // same as above but with y reversed, so (0,0) is the top left
Just in case you want to render something in the local space of another object, you have to do this:
public Texture2D myTexture;
public Transform someObject; // draw in this object's local space

private void OnPostRender()
{
    var camera = GetComponent<Camera>(); // OnPostRender runs on the camera's GameObject
    GL.PushMatrix();
    // Use the same projection matrix the camera uses to draw the scene
    GL.LoadProjectionMatrix(camera.projectionMatrix);
    // Combine the view and model matrices (right-to-left order)
    GL.modelview = camera.worldToCameraMatrix * someObject.localToWorldMatrix;
    Graphics.DrawTexture(new Rect(0, 0, 10, 10), myTexture);
    GL.PopMatrix();
}
So we set up the same projection matrix the camera uses to draw the scene, and as the modelview matrix we set the usual MV matrix (model and view matrices combined in right-to-left order). Everything you render now will appear in the local space of “someObject”.
I have found many downsides to using Graphics.DrawTexture though! For one, you can’t see it in the Scene view, which in my use cases is very annoying. Another thing is that whenever I have used it, it is a lot less efficient than just doing Graphics.DrawMesh (it seems to be calling GUITexture.Draw in the profiler). Also, it has to be done in OnPostRender, which is pretty limiting.
I would suggest just using Graphics.DrawMesh and Graphics.DrawMeshInstanced.
@Rennan24 Thanks for the info. This would seem to answer my question about using it for 3D then. No point in doing all that extra work to calculate scaling if it’s not even faster. It seems to be optimized for screen space scaling anyway, so I’d kind of be fighting its whole purpose. My goal is really just to find the fastest way to draw many billboarded sprites in 3D with as few batches as possible.
Currently there is no way to draw more than 1023 sprites per batch when using Graphics.DrawMeshInstanced, and you have to use a Matrix4x4[] instead of a NativeArray, so you can’t jobify it until Unity updates their API. If I were you, it might be worth trying to use the Unity particle system, spawning particles in manually through code and manipulating them there, especially since they support billboarding and work in 3D.
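A minimal sketch of that particle-system approach, assuming a ParticleSystem ("ps") configured with a billboard render mode and its emission module disabled, could look like this:

using UnityEngine;

public class ManualParticles : MonoBehaviour
{
    public ParticleSystem ps; // billboard render mode, emission disabled
    ParticleSystem.Particle[] particles;

    void Start()
    {
        var main = ps.main;
        main.maxParticles = 10000;
        particles = new ParticleSystem.Particle[main.maxParticles];
        for (int i = 0; i < particles.Length; i++)
        {
            particles[i].position = Random.insideUnitSphere * 10f;
            particles[i].startSize = 0.1f;
            particles[i].startColor = Color.white;
            particles[i].startLifetime = float.MaxValue; // keep particles alive
            particles[i].remainingLifetime = float.MaxValue;
        }
        // Hand the whole buffer to the particle system in one call
        ps.SetParticles(particles, particles.Length);
    }
}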
Yeah, I was hoping that DrawTexture() would be faster than DrawMesh(); sadly, that’s not the case.
Hi,
first, thanks for the great asset! I used it in one of my projects.
But I noticed that going with the regular MeshInstanceRenderer that has a material with the “Sprite Instanced” shader yields very similar results.
Things I noticed: I can set the rotation with the MeshInstanceRenderer but not with your InstancedSpriteRenderer.
With your InstancedSpriteRenderer I have the possibility to adjust the scale and pivot of the texture to be drawn (although I think all pivots remain at (0.5, 0.5) for my game).
I have a question: There are these requirements for my game and I would like to know if it is possible to achieve them with either your component system or with the MeshInstanceRenderer.
If not, I have to use regular SpriteRenderers again.
So in my game I want to:
- set the position of a sprite
- set the rotation of a sprite
- set the scale of a sprite, uniform (one value for both axes)
- set the color of a sprite
One way I tried to do it was to render with batches of 1 and change the mesh for the size and the material for the color each time.
As you can imagine, performance was abysmal.
So do you have a hint on how to go about this?
I might have to resort to regular GameObjects for rendering again.
The GitHub repository up top already utilizes position and rotation with the Position2D and Heading2D components; you can also easily add a float for the rotation angle and then rotate the TransformMatrix in the RenderSystem.
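A rough sketch of that idea, with illustrative names (a hypothetical angle component, and its use when the render system fills the per-instance matrices):

// Hypothetical per-entity rotation component
public struct RotationAngle : IComponentData
{
    public float Value; // radians
}

// ...inside the render system, when building each instance matrix:
Matrix4x4 matrix = Matrix4x4.TRS(
    new Vector3(position.x, position.y, 0f),          // from Position2D
    Quaternion.Euler(0f, 0f, angle * Mathf.Rad2Deg),  // 2D rotation about z
    Vector3.one);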
I’ve quickly added scaling and coloring support by just making a few modifications. I’m also just using a built-in shader.
If you want individual scaling and coloring, you can’t use SharedComponentData like in the example. You could instead add Color and Scale components that you then utilize in the RenderSystem, rather than having it all in the SpriteInstanceRenderer. But you won’t be able to take advantage of some of the batch rendering.
I’m mostly just playing around with it to learn the Unity ECS myself.
The following code just shows the areas with relevant modifications:
SpriteInstanceRendererComponent.cs:
[Serializable]
public struct SpriteInstanceRenderer : ISharedComponentData
{
    public Texture2D sprite;
    public int pixelsPerUnit;
    public float2 pivot;
    public Color color;
    public float uniformScale;

    public SpriteInstanceRenderer(Texture2D sprite, int pixelsPerUnit, float2 pivot, Color color, float uniformScale)
    {
        this.sprite = sprite;
        this.pixelsPerUnit = pixelsPerUnit;
        this.pivot = pivot;
        this.color = color;
        this.uniformScale = uniformScale;
    }
}
SpriteInstanceRenderSystem.cs:
Mesh mesh;
Material material;

// Quad size in world units, scaled per renderer
var size = math.max(renderer.sprite.width, renderer.sprite.height) / (float) renderer.pixelsPerUnit * renderer.uniformScale;
float2 meshPivot = renderer.pivot * size;

if (!meshCache.TryGetValue(renderer, out mesh))
{
    mesh = MeshUtils.GenerateQuad(size, meshPivot);
    meshCache.Add(renderer, mesh);
}

if (!materialCache.TryGetValue(renderer, out material))
{
    material = new Material(Shader.Find("Legacy Shaders/Transparent/Diffuse"))
    {
        enableInstancing = true,
        mainTexture = renderer.sprite,
        color = renderer.color
    };
    materialCache.Add(renderer, material);
}
SpriteRendererSceneBootstrap.cs:
var renderers = new[]
{
    new SpriteInstanceRenderer(animalSprites[0], animalSprites[0].width, new float2(0.5f, 0.5f), Color.white, 1),
    new SpriteInstanceRenderer(animalSprites[1], animalSprites[1].width, new float2(0.5f, 0.5f), Color.cyan, 0.5f),
    new SpriteInstanceRenderer(animalSprites[2], animalSprites[2].width, new float2(0.5f, 0.5f), Color.red, 2),
};
Hey that’s awesome, thanks for your solution!
I already figured out a way to adjust scaling in my game. There is a thread I found about it here:
The solutions for the color will only work for the same InstanceRenderer and material, I guess? I think there is no possibility of having an individual color for each element (as that would make it impossible to batch the draw calls).
Yes, I’m redoing the whole thing with a separate UniformScaleComponent and a ColorComponent. At least at the 10,000-sprite scale, it gets extremely slow giving each sprite its own color. I’m still playing around with optimizations though.
Glad to hear that!
I tried rendering the meshes one by one, but performance was abysmal; it was even better to use 1,000 GameObjects with SpriteRenderers in this case, at least from what I’ve experienced.
I think I’ll simply accept that I can only change the color for the whole renderer. That’s not that big of a deal, but per-sprite color would have opened certain possibilities, like using a color change to visualize a damage effect, for example.
I didn’t know much about Material Property Blocks, but the API reference says this:
MaterialPropertyBlock is used by Graphics.DrawMesh and Renderer.SetPropertyBlock. Use it in situations where you want to draw multiple objects with the same material, but slightly different properties. For example, if you want to slightly change the color of each mesh drawn. Changing the render state is not supported.
So, in the current design, the only way to utilize material property blocks is to partially rework the InstancedMeshRenderer and pass the prop block to the draw call. The caveat is that this basically breaks MOST of the batching, so while it is neat for a few objects, an environment with 100k+ cubes running at 120+ fps dropped down to about 20 fps pretty quickly. They are still working on the graphics portion at the moment, so it almost seems too hacky (at least to me) to try to design workarounds.
Why does using a prop block break the batch? Is it a bug? I thought the point of it was to allow variations in the same material without breaking the draw call. (Not using it would break the batch, since every little color adjustment on the material would create a new material.)
In the case of the ECS, the MeshInstancedRenderSystem batches the transform matrices and combines them into an array. You then have the OPTION to pass in a material property block, but only for that array set. So if you consider again 100k cubes all having different transforms, this call handles it no problem; but if you then have 100k cubes with different property blocks, you would need to call this method PER cube rather than per batch of cubes.
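For reference, the per-batch option might look roughly like this (a sketch, assuming a shader that declares _Color as an instanced property via UNITY_DEFINE_INSTANCED_PROP, and a hypothetical instanceColors source array):

var block = new MaterialPropertyBlock();
var colors = new Vector4[count]; // one entry per instance in this 1023-max batch
for (int i = 0; i < count; i++)
    colors[i] = instanceColors[offset + i]; // hypothetical per-instance color source
block.SetVectorArray("_Color", colors);
// One draw call for the whole batch, with per-instance colors
Graphics.DrawMeshInstanced(mesh, 0, material, matrices, count, block);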
Again, you CAN hack a solution in at the moment to get prototyping working, but at a major loss of performance. One option is to group prop blocks into batches; however, consider 100k cubes whose colors all vary by just one of the four vector components, and it breaks down again.