How can I save material renderqueue permanently?

We’re having a lot of headaches due to Unity’s strange (broken?) transparent object rendering order.

Luckily, we’re able to set the rendering order manually using Material.renderQueue. The problem with this technique is that the value is not saved in the material. (Unity did think we had changed the materials, because we made the mistake of marking them all as dirty.) However, Unity keeps overwriting the changes every time we load a scene, save a scene, etc. (which is all the time). To work around this we have basically had to hook into a lot of Unity events just to write our values back into the materials all the time.

If the materials had kept their renderQueue values, this huge amount of code and headache could have disappeared. (Or, even better, Unity could order surfaces according to their z-value in camera space, which is supposed to be the standard.)

If anyone has an elegant solution, or less annoying workaround than continuously rewriting material renderQueue values, please share with us!

So far this rendering problem is taking up between 20% and 40% of our game level design time.

P.S. We have a lot of layers to handle, so writing custom shaders just to cope with depth (which is what we tried first) is not an option.

I wish this request had gained more traction. It’s really important for 2D-games using parallax effects.


You do not have to rewrite the renderQueue “continuously”, only at object startup. So the best method you can use is to attach a script to an object/hierarchy that modifies the renderQueue (recursively) in Awake() or Start().
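A minimal sketch of that approach, assuming you attach it to the root of the hierarchy whose draw order you want to control (the class and field names here are our own, not anything built into Unity):

```csharp
using UnityEngine;

// Re-applies a chosen renderQueue to every material under this object at
// startup, so the value survives Unity discarding it on scene load/save.
public class SetRenderQueue : MonoBehaviour
{
    [SerializeField]
    private int queue = 3000; // 3000 is the default "Transparent" queue

    void Awake()
    {
        // GetComponentsInChildren also returns this object's own renderers;
        // passing true includes inactive children as well.
        foreach (Renderer r in GetComponentsInChildren<Renderer>(true))
        {
            // Accessing .materials (not .sharedMaterials) instantiates
            // per-renderer runtime copies, so the material assets on disk
            // are left untouched.
            foreach (Material m in r.materials)
            {
                m.renderQueue = queue;
            }
        }
    }
}
```

Because the copies are created at runtime, nothing needs to be written back into the serialized materials, and the editor can no longer overwrite the value.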

Yes, it would help to have a slider in a material with which to control the shader’s queue, and which can be stored with the material. But remember that every specific render queue value needs a (mostly) separate render pass, which can quickly kill your performance.

Sorting transparent objects correctly is a problem that has been unsolved for the last 40 years of computer graphics, at least for the general case.
The only real solution (at least down to the polygon level) would be to sort the polygons individually instead of per object mesh. However, this is also the best way to cripple your performance, since in the worst case you’d need a material change after each polygon. And even with per-polygon sorting, the algorithm will still fail on intersecting or cyclically overlapping polygons if your tessellation is too coarse to separate them.
Or you need to use raytracing instead of scanline or z-buffer rendering altogether.
But for the standard rendering methods as used by OpenGL/DirectX, no simple solution exists.

Yes, you’ll often get better results when sorting by screen-space z instead of camera distance (and it’s actually faster to compute), but again, this is only true for special cases, such as the parallel planes discussed in the question you linked to. Simply rotating those planes by 90 degrees will already produce incorrect results for the z-sorting, and better results for the distance sorting. The same is often true for non-planar 3D objects.
(EDIT: Note especially that the z-sorting method is sensitive to camera rotations, so you would get flipping artefacts simply by looking around. These artefacts are prevented by the distance sorting)
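To make the difference concrete, here is a small sketch of the two sort keys, assuming you have a Camera and a Renderer at hand (the class and method names are illustrative, not Unity API):

```csharp
using UnityEngine;

// Two candidate sort keys for transparent renderers.
public static class TransparencySortKeys
{
    // Distance sorting (what Unity uses for the transparent queue):
    // stable under camera rotation, but gives the wrong order for
    // stacked parallel planes viewed at an angle.
    public static float DistanceKey(Camera cam, Renderer r)
    {
        return (r.bounds.center - cam.transform.position).magnitude;
    }

    // Screen-space z sorting: correct for parallel planes (2D/parallax
    // layers), but the order can flip as soon as the camera rotates.
    public static float ScreenZKey(Camera cam, Renderer r)
    {
        // worldToCameraMatrix transforms into camera space, where z is depth.
        return cam.worldToCameraMatrix.MultiplyPoint(r.bounds.center).z;
    }
}
```

Sorting by the first key is rotation-stable; sorting by the second matches the layered-plane case from the question. Neither is correct in general, which is the point above.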

Judging from your screenshot, you are only dealing with textures whose pixels are either fully opaque or fully transparent. Note that for these cases, a Cutout shader is what you want, instead of a normal alpha blending shader. Cutout shaders write to the z-buffer, don’t need sorting, and render correctly even in cases where objects penetrate each other. Their disadvantage is of course that they can’t deal with semi-transparent objects/pixels.
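If you want to try that without editing the materials by hand, a sketch like the following swaps a renderer over to one of Unity’s built-in legacy cutout shaders at startup (the shader name "Transparent/Cutout/Diffuse" and the "_Cutoff" property are the built-in legacy ones; the class name is our own):

```csharp
using UnityEngine;

// Replaces this renderer's shaders with the built-in cutout shader and
// sets the alpha-test threshold.
public class UseCutoutShader : MonoBehaviour
{
    [SerializeField, Range(0f, 1f)]
    private float cutoff = 0.5f; // pixels with alpha below this are discarded

    void Awake()
    {
        Shader cutout = Shader.Find("Transparent/Cutout/Diffuse");
        foreach (Material m in GetComponent<Renderer>().materials)
        {
            m.shader = cutout;
            m.SetFloat("_Cutoff", cutoff);
        }
    }
}
```

Since cutout rendering writes depth, these objects then sort correctly against each other with no renderQueue fiddling at all.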