[SOLVED] Per-object data, "material" or "sharedMaterial"?

Hi guys !

I have written a simple surface + vertex shader that simulates tiles of water with waves. From an initially flat mesh, it updates vertices and normals on the fly using a noise function to make it look like agitated water. The actual mesh remains flat from the Unity C# point of view.

For each tile of water, I need to provide the shader with object-specific data. What I have been doing until now is setting shader variables (not properties) from C# as follows :

private Vector4[] worldBasePointsAsVector4;

material.SetVectorArray("worldBasePoints", worldBasePointsAsVector4);

With this declaration in my shader, which receives the object-specific data:

uniform float4 worldBasePoints[400];

I am storing that data in regular shader variables instead of Unity shader properties because I need my data to be an array, which doesn’t seem to be possible with properties.

This per-object data transfer to the shader is only done once, in Awake(). I also make a copy of the material in Awake() so that the values are per-object instead of being shared by all water tiles:

material = GetComponent<MeshRenderer>().material;

It works. However, I’m getting errors saying that materials may be leaked, and it seems that I can get a serious 4x performance increase if I share the material across all water tiles by doing :

material = GetComponent<MeshRenderer>().sharedMaterial;

The problem is that, when I use sharedMaterial instead, the data (i.e. the Vector4 array) I pass to my shader becomes the same for all water tiles. On the other hand, it is very annoying to tune the properties of the shader when each water tile has its own material instance: whenever I want to set a property, I have to modify the original material itself from the project view and then assign it back to all the existing water tile objects, which is impractical.

So, my question is : given what I’m trying to achieve, is it a good choice to share the same material across all water tile objects ? If yes, how can I pass object-specific data to the shader despite the material being shared ? If no, how can I remove these “materials will be leaked” errors and what is causing such a performance loss when using a material instance per object ?

Thank you very much ! :smile::smile::smile:

PS : I only started to learn shaders a few days ago so I’m still a big noob with this.

EDIT: I managed to pass per-object data by storing it in Mesh.tangents. This way I can both use a shared material and have per-object data. However, the look of the water is all messed up now. I’m assuming that Unity makes use of the tangents that I’ve used to store my data.
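In case it helps, here is roughly what I did, as a simplified sketch with made-up names (not my exact code):

using UnityEngine;

// Rough sketch: smuggle per-object data into Mesh.tangents so a shared material
// can still read per-object values through the TANGENT vertex semantic.
// Caveat, as noted above: Unity uses tangents itself (e.g. for normal mapping),
// so hijacking them like this can break the shading.
[RequireComponent(typeof(MeshFilter))]
public class PackDataIntoTangents : MonoBehaviour
{
    void Awake()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;   // per-object mesh instance
        Vector3[] vertices = mesh.vertices;
        var packed = new Vector4[vertices.Length];
        for (int i = 0; i < packed.Length; i++)
        {
            // Example payload: the vertex position in world space.
            Vector3 w = transform.TransformPoint(vertices[i]);
            packed[i] = new Vector4(w.x, w.y, w.z, 1f);
        }
        mesh.tangents = packed;
    }
}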

Using the exact same material allows for batching. Batching means Unity combines multiple meshes into one mesh, drawn in one draw call. But the only unique per-“mesh” values you can have with batching need to be stored in the vertex data, which is far more limited; certainly not enough to store 400 unique float4 values.

If you use multiple materials, batching doesn’t work because it has to be multiple meshes at that point, and each unique material has to be another draw call.

You could potentially still use batching by setting one giant array with the values for all of the tiles, plus a second array holding the index at which each particular tile’s data starts. Then store that start index on the mesh itself. You can use additional vertex streams to modify the data in the meshes at runtime without having to copy them. A rough C# sketch of the idea follows.
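Something along these lines on the C# side (a rough sketch with made-up names; it stores the index in a UV channel rather than through additionalVertexStreams just to keep it short):

using System.Collections.Generic;
using UnityEngine;

// Sketch: one global array shared by every tile, plus a per-tile start index
// stored in the mesh (here in UV2) so the vertex shader knows where to read.
public class WaterTileRegistry : MonoBehaviour
{
    public Material sharedWaterMaterial;   // the single material used by all tiles
    private readonly List<Vector4> allBasePoints = new List<Vector4>();

    public void RegisterTile(MeshFilter tile, Vector4[] tileBasePoints)
    {
        int startIndex = allBasePoints.Count;
        allBasePoints.AddRange(tileBasePoints);

        // Stamp the start index into every vertex of this tile's mesh.
        Mesh mesh = tile.mesh;
        var indices = new List<Vector2>(mesh.vertexCount);
        for (int i = 0; i < mesh.vertexCount; i++)
            indices.Add(new Vector2(startIndex, 0f));
        mesh.SetUVs(2, indices);

        // One upload shared by everyone. Note SetVectorArray is capped at 1023
        // elements, and the array size is fixed the first time the shader sees it.
        sharedWaterMaterial.SetVectorArray("worldBasePoints", allBasePoints);
    }
}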

Depending on how many points you need, this won’t help that much since arrays in shaders are usually limited to a length of around 1000.

There’s also instancing, which, if each tile mesh is exactly the same, could let you draw multiple tiles each with unique data in one draw call. To do this you’d have to set up your shader to handle instancing, and then use MaterialPropertyBlocks to set unique data on each renderer component rather than modifying the material directly.
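Roughly like this on the C# side (a minimal sketch; “_TileOffset” is a made-up instanced property, not something from your shader):

using UnityEngine;

// Minimal sketch: per-renderer data via a MaterialPropertyBlock. The material
// itself stays shared, so no instance is created and instancing can still batch.
[RequireComponent(typeof(MeshRenderer))]
public class WaterTileInstanceData : MonoBehaviour
{
    public Vector4 tileOffset;   // whatever single per-tile value the shader needs

    void Awake()
    {
        var rend = GetComponent<MeshRenderer>();
        var block = new MaterialPropertyBlock();
        rend.GetPropertyBlock(block);                 // keep anything already set
        block.SetVector("_TileOffset", tileOffset);   // hypothetical property name
        rend.SetPropertyBlock(block);
    }
}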

However there’s no support for unique array data per instance. This is because instancing works by putting the data into an array, and accessing the array using the instance id as the index, and shaders don’t support arrays of arrays.

So we’re back to having one big array still. To get past the limit on the array variables you can use a structured buffer which allows for up to 65536 floats, or 16384 float4s.
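On the C# side that would look something like this (a sketch with made-up names; the shader would need a matching StructuredBuffer<float4> and a high enough #pragma target):

using UnityEngine;

// Sketch: upload all base points through a ComputeBuffer instead of a shader
// array, sidestepping the array length limits discussed above.
public class WaterBasePointsUploader : MonoBehaviour
{
    public Material sharedWaterMaterial;
    private ComputeBuffer buffer;

    public void Upload(Vector4[] allBasePoints)
    {
        if (buffer != null) buffer.Release();
        buffer = new ComputeBuffer(allBasePoints.Length, sizeof(float) * 4);
        buffer.SetData(allBasePoints);
        // "worldBasePointsBuffer" is an assumed name for the shader-side
        // StructuredBuffer<float4>.
        sharedWaterMaterial.SetBuffer("worldBasePointsBuffer", buffer);
    }

    void OnDestroy()
    {
        if (buffer != null) buffer.Release();   // ComputeBuffers must be released manually
    }
}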


Thank you very much for this detailed answer.

My meshes are not all the same. Let’s say there are a few types of meshes. Some are flat, some are curved, etc. And all water tiles of the same “type” have the same shape and thus the same mesh.

My 400 float4 vectors were actually the positions of the vertices of the mesh in world space, computed once in Awake(). Each water tile is a square of 20x20 vertices, and I just wanted to cache these world positions. It seems computing the world position within the shader is rather cheap, even though it feels wasteful to recompute a position that doesn’t change. However, for some reason, the world position returned by mul(unity_ObjectToWorld, v.vertex).xyz behaves as if all the water objects were at the same position, while they are not, so this is weird.
I have other values that I need to pass to the shader, beyond the world-space vertex position, but maybe I can also manage to compute them within the shader so I don’t need to pass them at all.

I could use that mega array as you suggest, but since my water objects can be added and removed dynamically (my game has a race track editor) it might be a pain in the ass to manage. Besides, it’s very likely that there will be more than 65k vertices, so that wouldn’t do the trick.

I assume that the only choices I have are :

  • Either recompute all the values I need from the shader without passing them from the Unity C# side, allowing me to keep using a shared material ;
  • Or use multiple materials like I am doing right now, but that would involve a real performance loss.

I’ll dig further into this. Thanks for the lesson.

Doing everything in the shader is generally best. You want all that water in a single draw.

What makes you think the world position is wrong?


Well, for each water tile object I’m transforming its vertices into world space in order to use these positions as input values for a 3D simplex noise function returning a wave height. That wave height is then applied along the water normal at a given point, so that the updated vertex (i.e. with the wave effect applied) is originalMeshVertex + waterNormal * waveHeight.

But for some reason, all meshes have the very same shape (and thus vertices with the same world positions) if I recompute the world coordinates of their vertices within the (shared) shader rather than computing them in Unity and passing them to each (per-object) material instance. I don’t know why.

The issue you described sounds to me like you’re defining v.vertex as a float3 and not the float4 it should be. That would cause the issue you’re having, as it would ignore the object’s position and only take into account the object-relative vertex positions.

Use this:

float3 worldPos = mul(unity_ObjectToWorld, float4(v.vertex.xyz, 1.0)).xyz;

Use that line to compute the world position in the shader. Unity will always calculate the world position in the vertex shader anyway. It’s part of the UnityObjectToClipPos() function, and shader compilers are good at optimizing repeated code, so it’s free to do that calculation as long as you use the line exactly like above.
I really wish Unity would add a UnityObjectToWorldPos function…


Wow it works now ! Thanks a lot ! I didn’t imagine that this 4th field would matter and assumed that it would just be ignored. Thanks again for the help.

EDIT: Sorry to bother you again. If you have more time to waste, would you have an idea why reflections look so weird? I have set all my normals to float4(0, 1, 0, v.normal.b) (i.e. upwards) just to test, and this is how it renders in the attached image. I wasn’t expecting to see proper reflections with these hard-coded normals, but it doesn’t look the way I was expecting, i.e. like a flat surface. Would you have an idea why? Is it some kind of world / local coordinate issue? Should the normals be given in world space or something? Thanks!

I would suggest you read up on matrix math. The TL;DR version is that the last value is used to say “apply the position offset”, and with it removed* or set to 0.0 the matrix only applies the scale and rotation. Basically this is for directional vectors like normals. It has to be 1.0 for the matrix to apply the position. Unity automatically sets the w component of v.vertex to 1.0 when sent to the GPU, but it also overrides it to 1.0 in the UnityObjectToClipPos() function regardless of whether you pass a float4 or float3 value to it.
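To make that concrete on the C# side (just an illustrative sketch, not something you need for the fix), Unity’s Matrix4x4 makes the same distinction explicit:

using UnityEngine;

// Illustration of the w component's role:
// MultiplyPoint treats the input as (x, y, z, 1) and applies the translation;
// MultiplyVector treats it as (x, y, z, 0) and applies only rotation and scale.
public class WComponentDemo : MonoBehaviour
{
    void Start()
    {
        Matrix4x4 objectToWorld = transform.localToWorldMatrix;
        Vector3 worldPos = objectToWorld.MultiplyPoint(new Vector3(1f, 0f, 0f)); // moves with the object
        Vector3 worldDir = objectToWorld.MultiplyVector(Vector3.up);             // translation ignored
        Debug.Log("worldPos = " + worldPos + ", worldDir = " + worldDir);
    }
}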

Normals should only have three components, i.e. a float3 or half3. Normal maps are different because they’re encoding values with a -1.0 to 1.0 range into a texture with a 0 to 255 (or 0.0 to 1.0 in the shader) range, and are fixed4 values just because of how Unity stores them by default (it has to do with increasing quality with desktop texture compression). I’m not really sure why you’re defining a float4 value, or how it’s being used.

  • Warning: doing a mul(float4x4, float3) only works on some hardware! On Windows, most GPUs will automatically treat that as a direction, and convert it to mul(float3x3, float3). MacOS, and mobile will throw an error instead. Linux … it depends on the drivers they’re using. Most consoles will also error on that.

Thanks, I’d never have guessed that alone. But you’ll probably tell me it’s written in the documentation somewhere; there is just a lot to learn.

My normals are actual vertex normals, not a texture used as a normal map. I usually define them as float3/Vector3, but I believed that shaders required them to be defined as float4, for some optimization / memory alignment reason or something like that. I don’t know what led me to think that. I’ll just make them float3 then. Thanks.

EDIT: For some reason I’m still getting these weird reflections even if I leave the normals untouched lol, I don’t know… :frowning:

Nope, not documented. That’s just from reading the shader files and experience.

Without seeing the shader code, I have no idea.

Building on what @bgolus has said, this article: http://catlikecoding.com/unity/tutorials/rendering/part-1/ explains the matrix math described above in a nice, easy-to-follow fashion :slight_smile:


Hey, it seems like my previous comment has been blocked for some unknown reason, so I will repost it.

I don’t want you to feel forced to have a look at my shader, but if you offer to help I’m not refusing :slight_smile: It has evolved a bit since earlier. I added cube map reflections as suggested in this tutorial: Unity - Manual: Surface Shader examples (I’m using Unity 5.6.3).

(some edits at the end)

For some reason, I still get weird reflections. They seem to stretch and repeat, just like in the image I attached in a previous comment of mine. I’m sure I’m really close to get something that looks cool. There must be a tiny mistake somewhere that messes all up, but I can’t find what.

Here is the latest code of my shader, but it’s totally okay if you don’t have time to help me, you’re not here to code my game for me :

Shader "WaterVertexShader"
{
   Properties
   {
       _Speed ("Speed", Float) = 0.5
       _Scale ("Scale", Float) = 0.5
       _Height ("Height", Float) = 0.2
       _Color("Color", Color) = (0,0,0.5,1)
       _Specular("Specular", Color) = (1,1,1,1)
       _Glossiness ("Smoothness", Range(0,1)) = 0.5
       _Cube ("Cubemap", CUBE) = "" {}
   }
   SubShader
   {
       Tags { "RenderType" = "Transprent" }

       CGPROGRAM

       #pragma target 5.0

       #pragma surface surf StandardSpecular vertex:VSMain //fragment:FSMain
       //#pragma fragment FSMain
       //#pragma multi_compile_fog
    
       #include "UnityCG.cginc"
       #include "Assets/SimplexNoise/noiseSimplex.cginc"

       struct VertexToFragment
       {
           float2 uv : TEXCOORD0;
           UNITY_FOG_COORDS(1)
           float4 vertex : SV_Position;
       };

       sampler2D _MainTex;
       float4 _MainTex_ST;

       uniform half _Glossiness;
       uniform fixed4 _Color;
       uniform fixed3 _Specular;
       uniform samplerCUBE _Cube;
       uniform float _Speed;
       uniform float _Scale;
       uniform float _Height;
       uniform float _NormalSmoothing;

       uniform float4 localDirections[400];

       static const uint xResolution = 20;
       static const uint zResolution = 20;

       struct appdata
       {
           float4 vertex : POSITION;
           float3 normal : NORMAL;
           float4 color : COLOR;
           float4 texcoord : TEXCOORD0;
           float4 texcoord1 : TEXCOORD1;
           float4 texcoord2 : TEXCOORD2;
           uint id : SV_VertexID;
       };


       inline uint IndexFromXZ(uint x, uint z)
       {
           return z * xResolution + x;
       }

       inline uint XFromIndex(uint index)
       {
           return index % xResolution;
       }

       inline uint ZFromIndex(uint index)
       {
           return index / xResolution;
       }

       float4 GetUpdatedVertexRecomputeWorldPosition(float4 localBasePoint, uint index)
       {
           float3 worldBasePoint = mul(unity_ObjectToWorld, localBasePoint).xyz;
           float3 localDirection = localDirections[index];
           float3 time3 = _Time.xyz * _Speed;
           float height = snoise(worldBasePoint * _Scale + time3) * _Height;
           float3 updatedVertex = localBasePoint + localDirection * height;
           return float4(updatedVertex.xyz, localBasePoint.b);
       }

       inline bool IsEdgeVertex(uint x, uint z)
       {
           return x == 0 || x == xResolution - 1 || z == 0 || z == zResolution - 1;
       }

       struct Input
       {
           float2 uv_MainTex;
           float4 vertex : POSITION;
           float4 color : COLOR;
           float3 worldRefl;
       };
 
       VertexToFragment VSMain(inout appdata v, out Input inp)
       {
           UNITY_INITIALIZE_OUTPUT(Input,inp);

           uint index = v.id;

           uint x = XFromIndex(index);
           uint z = ZFromIndex(index);

           float4 currentXZVertex = GetUpdatedVertexRecomputeWorldPosition(v.vertex, index);
           float3 normal;

           if (IsEdgeVertex(x, z))
           {
               normal = float3(0, 1, 0);
           }
           else
           {
               float xStep = 1 / (xResolution - 1);
               float zStep = 1 / (zResolution - 1);

               float4 prevXVertexOriginal = v.vertex + float4(-xStep, 0, 0, 0);
               float4 nextXVertexOriginal = v.vertex + float4(+xStep, 0, 0, 0);
               float4 prevZVertexOriginal = v.vertex + float4(0, 0, -zStep, 0);
               float4 nextZVertexOriginal = v.vertex + float4(0, 0, +zStep, 0);

               float4 prevXVertex = GetUpdatedVertexRecomputeWorldPosition(prevXVertexOriginal, index);
               float4 nextXVertex = GetUpdatedVertexRecomputeWorldPosition(nextXVertexOriginal, index);
               float4 prevZVertex = GetUpdatedVertexRecomputeWorldPosition(prevZVertexOriginal, index);
               float4 nextZVertex = GetUpdatedVertexRecomputeWorldPosition(nextZVertexOriginal, index);

             //normal = normalize(cross(prevXVertex - nextXVertex, prevZVertex - nextZVertex));
               normal = float3(0, 1, 0);
           }

           v.vertex = float4(currentXZVertex.xyz, v.vertex.b);
           v.normal = normal;

           VertexToFragment o;
           o.vertex = UnityObjectToClipPos(v.vertex);
           o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);
           //UNITY_TRANSFER_FOG(o,o.vertex);
           return o;
       }

       // Add instancing support for this shader. You need to check 'Enable Instancing' on materials that use the shader.
       // See https://docs.unity3d.com/Manual/GPUInstancing.html for more information about instancing.
       // #pragma instancing_options assumeuniformscaling
       UNITY_INSTANCING_CBUFFER_START(Props)
           // put more per-instance properties here
       UNITY_INSTANCING_CBUFFER_END

       void surf (Input IN, inout SurfaceOutputStandardSpecular o)
       {
           o.Albedo = _Color.rgb;
           o.Alpha = _Color.a;
           o.Smoothness = _Glossiness;
           o.Specular = _Specular;
           o.Emission = texCUBE(_Cube, IN.worldRefl).rgb * 0.5;
       }

       ENDCG
   }
}

EDIT: If I comment out the line where I update the vertex in the inout structure of the vertex shader, i.e. v.vertex = float4(currentXZVertex.xyz, v.vertex.b); then my reflection problem is gone. But then I get a flat surface, of course…

EDIT: OK, it seems like I fixed my weird reflection issues by modifying things rather randomly :smile: I used v.vertex = float4(currentXZVertex.xyz, 1); instead of v.vertex = float4(currentXZVertex.xyz, v.vertex.b); and it seems to work. Now all I have to do is fix my normals.

ALLELUUUJAAAAAAAAAAAAHHHHHHHHHH !!! ;);):wink:

Thanks for the help guys !!

EDIT : I can now have 2.1M triangles at 100+ FPS and yet it’s still sucking more CPU (9.5 ms) than GPU (4 ms). That’s amazing. If I did this with the CPU it would easily be 10000 times slower at least. But now I’m starting to get curious about why this is using so much CPU since I’m not passing any data to my shaders. I guess it’s due to the large number of meshes in the scene.
