Separate Physics and Rendering layers

https://feedback.unity3d.com/suggestions/separate-physics-and-render-laye

It is a longstanding Unity issue: layers are used for both physics and rendering. This is probably fine for small games, but once you start creating custom rendering buffers you can really paint yourself into a corner. For a few games I made in the past (Hard West, Ancient Space) I simply couldn’t do certain graphics features due to this limitation. I just had to cut them.

The only way to separate graphics and physics layers is to duplicate your gameobject and assign a different layer to each duplicate. That is not acceptable in a complex game for performance reasons (you are adding a considerable number of new gameobjects). Imagine doing that for 20 skinned characters (not to mention the need to sync animations across the duplicates). I just don’t see a reason why layers should be shared between physics and rendering. And why can one object only be assigned to a single layer? That is a mystery to me.

17 Likes

OK, here’s an actual production example explaining why this is important to me.

That video is time-stamped.

During production of this game we had two “dimensions”, only one visible at a time: one called Normal and another called Nightmare. We switched rendering between dimensions by assigning objects to either the Normal or the Nightmare layer. So when the Nightmare dimension was shown, the Nightmare layer was rendered and Normal was excluded. The same layers were also used for physics, for obvious reasons - we didn’t want to raycast against or interact with the currently invisible dimension. That worked well.
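To make that concrete, here is a minimal sketch of how such a switch can be wired up. The Normal and Nightmare layer names come from this post; the class and method names are purely illustrative, not our actual production code.

using UnityEngine;

// Minimal sketch of the dimension switch described above. "Normal" and
// "Nightmare" are the layer names from the post; the rest is illustrative.
public class DimensionSwitcher : MonoBehaviour
{
    Camera cam;
    int normalMask;
    int nightmareMask;

    void Awake()
    {
        cam = Camera.main;
        normalMask = LayerMask.GetMask("Normal");
        nightmareMask = LayerMask.GetMask("Nightmare");
    }

    // Because one layer drives both rendering and physics, the culling mask
    // and every raycast mask have to flip together.
    public void ShowNightmare(bool nightmare)
    {
        int visible = nightmare ? nightmareMask : normalMask;
        int hidden  = nightmare ? normalMask    : nightmareMask;

        cam.cullingMask = (cam.cullingMask | visible) & ~hidden;
    }

    public bool RaycastActiveDimension(Ray ray, bool nightmare, out RaycastHit hit)
    {
        // Raycast only against the currently visible dimension.
        return Physics.Raycast(ray, out hit, Mathf.Infinity,
                               nightmare ? nightmareMask : normalMask);
    }
}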

What did not work well were custom render buffers. I wanted to implement screen-space subsurface scattering for characters and foliage. Normally I would assign all foliage to a custom layer called Foliage, render SSS textures into a buffer and do some lighting calculations in a shader. Easy. What I couldn’t do was exactly that - assign all foliage to a new layer - because we already had to assign every gameobject to either the Normal or the Nightmare layer.
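For reference, this is roughly what the buffer pass would have looked like if a dedicated Foliage layer had been available - only a rough sketch, with a hypothetical Foliage layer, shader and texture name:

using UnityEngine;

// Rough sketch: render everything on a hypothetical "Foliage" layer into an
// off-screen texture with a second camera. Attach to the main camera so that
// OnPreRender fires just before it renders.
public class FoliageBufferPass : MonoBehaviour
{
    public Shader sssShader;          // replacement shader writing SSS data
    Camera bufferCam;
    RenderTexture sssBuffer;

    void Start()
    {
        sssBuffer = new RenderTexture(Screen.width, Screen.height, 16);
        bufferCam = new GameObject("FoliageBufferCam").AddComponent<Camera>();
        bufferCam.enabled = false;    // rendered manually below
    }

    void OnPreRender()
    {
        // Mirror the main camera, then render only the Foliage layer.
        bufferCam.CopyFrom(Camera.main);
        bufferCam.cullingMask = LayerMask.GetMask("Foliage");
        bufferCam.targetTexture = sssBuffer;
        bufferCam.clearFlags = CameraClearFlags.SolidColor;
        bufferCam.backgroundColor = Color.black;
        bufferCam.RenderWithShader(sssShader, "RenderType");

        // Make the buffer available to the lighting shaders.
        Shader.SetGlobalTexture("_FoliageSSS", sssBuffer);
    }
}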

But I could have used tags, right? Not really, as they were already being used for gameplay purposes. These days I think I could achieve similar results with command buffers, but they take significantly more time to set up due to the lack of proper examples, and sometimes you just need the simplicity that layers offer.
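For completeness, a command-buffer version might look something like the sketch below - the renderer list, material and property names are illustrative, not something we shipped:

using UnityEngine;
using UnityEngine.Rendering;

// Sketch of the command-buffer alternative: draw a hand-maintained list of
// renderers into a temporary buffer without touching layers at all.
// Attach to the camera that should run the pass.
public class FoliageCommandBufferPass : MonoBehaviour
{
    public Renderer[] foliageRenderers;   // maintained by hand, no layer needed
    public Material sssMaterial;

    CommandBuffer cmd;

    void OnEnable()
    {
        cmd = new CommandBuffer { name = "Foliage SSS" };
        int bufferId = Shader.PropertyToID("_FoliageSSS");

        cmd.GetTemporaryRT(bufferId, -1, -1, 16, FilterMode.Bilinear);   // -1 = camera pixel size
        cmd.SetRenderTarget(bufferId);
        cmd.ClearRenderTarget(true, true, Color.black);
        foreach (var r in foliageRenderers)
            cmd.DrawRenderer(r, sssMaterial);
        cmd.SetGlobalTexture("_FoliageSSS", bufferId);

        GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterForwardOpaque, cmd);
    }

    void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.AfterForwardOpaque, cmd);
    }
}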

Another issue we had, and I don’t see it being solved in the latest Unity, was assigning objects to different physics layers. Basically, we wanted to trigger destruction on every object touched by a passing bullet, and I wanted to use physics for the ragdolls of the debris. Super easy, but for various gameplay reasons I could not have any colliders on the Normal or Nightmare layers. This means I needed my own physics layer to assign my colliders to so they could interact.

So the issue was that we had these Normal and Nightmare dimensions we were switching between. Let’s say I have a destroyable crate which is seen only in the Normal dimension and is constructed out of 6 individual pieces:

  • I had to create 6 gameobjects with rendering components only and assign them to the Normal layer
  • To each piece I had to add a child object with colliders only and assign it to the Normal_VFX layer, so that I would be interacting with the proper physics objects
  • That’s a total of 12 gameobjects in the best case

Now let’s say I need to use the same destroyable crate in both the Normal and Nightmare dimensions. I would have to:

  • Create 6 gameobjects with rendering components only and assign them to the Normal layer
  • Create 6 child objects with colliders only and assign them to the Normal_VFX layer
  • Create 6 child objects with rendering components only and assign them to the Nightmare layer
  • Create 6 child objects with colliders only and assign them to the Nightmare_VFX layer
  • That’s a total of 24 gameobjects

All these problems would be solved if we could just assign the same gameobject to a few different layers, or have separate rendering and physics layers.
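To make the workaround concrete, here is a sketch of the kind of helper it boils down to - the layer names match the ones above, the helper itself is purely illustrative:

using UnityEngine;

// Illustrative helper for the workaround described above: for each visual
// piece, spawn a collider-only child on a separate "physics" layer
// (e.g. "Normal_VFX"), doubling the gameobject count just to get a second layer.
public static class LayerSplitWorkaround
{
    public static GameObject AddColliderProxy(GameObject visualPiece, string physicsLayer)
    {
        var proxy = new GameObject(visualPiece.name + "_Collider");
        proxy.transform.SetParent(visualPiece.transform, false);
        proxy.layer = LayerMask.NameToLayer(physicsLayer);

        // Duplicate the collision shape from the rendered mesh.
        var source = visualPiece.GetComponent<MeshFilter>();
        var col = proxy.AddComponent<MeshCollider>();
        col.sharedMesh = source != null ? source.sharedMesh : null;
        col.convex = true;            // needed for rigidbody debris

        return proxy;                 // one extra gameobject per piece
    }
}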

9 Likes

+1 for this

4 Likes

Can we get an official Unity clarification on that? Normally I would not push so hard for an official response, but Unity staff always suggest filing a feedback ticket. One was created in 2011 and has 278 votes so far, so what else is there for me to do?

4 Likes

Bump.

This has been a requested feature since 2011, and they’ve ignored it repeatedly. I doubt it’s gonna happen, though I’d certainly love for this to be added. It makes zero sense that physics and rendering are linked like they are.

But physics and rendering aren’t coupled - only in your mind. Just abstract it with your own naming scheme.

Not sure if this is irony or not so forgive me if it is :wink:

Physics and graphics are coupled simply because the same layer system is used for both camera culling and physics channels. If you need a gameobject to be in Layer A for camera culling but in Layer B for physics, both at the same time, you need to clone the gameobject, place one copy in Layer A and the other in Layer B. That way one can be culled by the camera and the other can be used for physics. Like I said - not a problem for a small game, but for something larger with a complex physics and culling setup aaaand a large number of gameobjects, this spins out of control quite fast. Have I mentioned that the maximum number of layers allowed is 32?

2 Likes

Don’t forget light culling. Although you only get (4?) of those to play with.

Not sure why Transparent FX is sitting next to Ignore Raycast. They must really get along like best buds. I’m sure Unity was extremely pro about this part!

Why can’t we even change 8 of them? That alone would help a lot. But that is what I meant by being able to repurpose their meanings. The whole thing is illogical and, I think, designed for Unity’s convenience more than ours.

HAVING SAID THAT… I don’t actually use more than a handful of layers in any of the projects I’ve worked on, big or small, because I generally design things that don’t rely on Unity systems all that much (Unity is usually just used for physics).

Is anyone from Unity watching the physics forums? What’s the purpose of them if no one from Unity staff is reading? They look quite dead. Sorry if I am being too rude, but it seems that some areas of Unity have better community support than others. For example, communicating with the particle system developers is really easy and straightforward, whilst getting any information on the terrain system or physics is basically impossible.
@Adam-Mechtley @MortenSkaaning @yant

@hippocoder
Would you mind sharing your secret on how you abstracted away from Unity’s layers?

1 Like

Well, I just reuse the blank ones and the ‘reserved’ ones, and have a class of constants instead so the actual names don’t matter. I admit it’s of limited use to some, and it’s stupid that Unity mixes physics and rendering. Although I’m struggling to find a situation where I can fill it up - even in the complex titles I’ve done in Unity over the last 8 years or so.
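Roughly speaking, something like this - the layer names and indices below are made up, it’s just the shape of the idea:

using UnityEngine;

// One possible shape of the "class of constants" approach: the Unity layer
// names stay whatever they are, and gameplay code only ever talks about
// these identifiers. All names and indices are made up.
public static class Layers
{
    // Layer indices (whatever blank/reserved slots the project repurposed).
    public const int Characters    = 8;
    public const int FoliageSSS    = 9;
    public const int DebrisPhysics = 10;

    // Pre-built masks for raycasts and culling.
    public static readonly int CharactersMask    = 1 << Characters;
    public static readonly int DebrisPhysicsMask = 1 << DebrisPhysics;
}

// Usage: Physics.Raycast(ray, out hit, 100f, Layers.DebrisPhysicsMask);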

Even if I don’t have a problem with it, I recognise it’s a problem for a lot of people and the design is absolute nonsense. A mandatory Water layer? Really, Unity? Since when do people even want Unity’s water, or use it? These layers should never, ever be shared with Unity internals anyway.

I guess it basically is one of those neglected areas of Unity. It works fine, it’s just lower priority I suppose.

2 Likes

Just found this thread and wanted to add one voice here… What a strange design decision …

1 Like

It’s a really good (real-world) example that varfare gave of why separating the layers makes sense. Too bad Unity staff aren’t responding, or perhaps not even looking at this thread.

2 Likes

I’m 100% sure they know, but there isn’t much user pain from this - just vague examples and plenty of workarounds. You can see why it’s not high priority.

I’ve just had to moderate laurent posting links to this thread in the 2017.3 beta forum (which I deleted because there’s a time and a place for nagging, and it’s totally unrelated to the beta). There need to be more actual ship-stopping issues for it to get bumped up in priority.

1 Like

Hi,

Yes, we’ve been using this shared layering system for a while now. When it was introduced, it was a natural and pretty convenient tool for helping developers learn parts of Unity: features that looked similar actually shared one concept, so you only had to learn it once. It’s clear, though, that the approach is fairly limiting these days. As noticed above in the thread, we don’t allow assigning GOs to multiple layers. We also use bitmasks for filtering, so there is a natural limit to the number of layers we can afford - the number of bits in a processor word. It’s 32 at the moment, and could probably be bumped up to 64 should we allow ourselves to deprecate a few platforms. I reckon that’s a stretch anyway.
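To make the bitmask point concrete, the filtering is conceptually just this - a simplified sketch, not the actual engine code:

using UnityEngine;

// The 32-layer ceiling follows directly from bitmask filtering:
// one bit per layer in a 32-bit int mask.
public static class LayerMaskMath
{
    // A single layer occupies one bit of the mask (layerIndex in 0..31).
    public static int ToMask(int layerIndex) => 1 << layerIndex;

    // A filter matches an object when the object's layer bit is set in the mask.
    public static bool Matches(int objectLayer, int mask) => (mask & (1 << objectLayer)) != 0;
}

// e.g. LayerMaskMath.Matches(LayerMask.NameToLayer("Water"), Camera.main.cullingMask)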

That said, a number of options were considered. One particularly interesting one was to make it so that each physics object could belong to multiple layers at the very same time. A simple, mask-based physics query could then be used to discover, for example, all objects visible to the camera but not NPC-produced projectiles. On the physics team, we were discussing this with the editor folks to align our roadmaps and make sure we don’t do the same work twice in slightly different ways. As a result, it’s clear we need to start working on this, but the dates are not quite finalised yet. That’s why there might be some perception of the problem being ignored, while that’s definitely not the case at all. We’re going to get it done for sure.

Hope that helps.

Anthony,
R&D Physics

13 Likes

I would just like to be able to set up what collides with which layer in code as well; it seems a bit painful only using the editor.

Yes, sure. I guess two objects could be passed to the broadphase for the collision computation once their masks, &-ed together, give a non-zero value (i.e. if they belong to at least one common layer). The old pair-wise collision ignoring should remain as is, I think. That would give full scripting coverage as well as full editor coverage. Makes sense?
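In case it helps to picture it, the proposed filter is essentially just this - the per-object membership mask is hypothetical, no such API exists in Unity today:

// Sketch of the proposed broadphase filter described above. The per-object
// membership mask is purely hypothetical; the existing pair-wise
// Physics.IgnoreCollision would stay untouched.
public static class MultiLayerFilter
{
    // Each object would carry a 32-bit membership mask instead of a single layer index.
    public static bool ShouldCollide(int maskA, int maskB)
    {
        // Collide only when the two objects share at least one layer bit.
        return (maskA & maskB) != 0;
    }
}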

1 Like

Sure, if the performance is the same, I’m happy.

Right now it’s an indirection: a linear table lookup alongside a few bitwise operations. In the future it should be just a bitwise operation, so I’d expect it to perform even better.

2 Likes

Sure it does! Thank you for the reply.

Would it be safe to assume that this might appear sometime next year? Or is it at the back of the backlog?