I decided it would be nice to use isTouchingLayers for a ground check in my new game, replacing the OverlapCircleNonAlloc I was using before. isTouchingLayers was described at Unite 2015 as a way to check if the player is on the ground.
The issue is that it still returns true in the physics update after a jump is initiated, which leaves the player in a state of both jumping and being on the ground, which I’d like to avoid.
How are you guys using isTouchingLayers to accurately reflect whether the player is grounded before jumping?
The link shows you how someone else is using isTouchingLayers to accurately reflect whether the player is grounded before jumping. Isn’t that what you asked?
You are also saying that grounded returns true when you aren’t touching the layer? Then you must be assigning specific layers to the wrong objects, or you aren’t using your isTouchingLayers implementation correctly.
Can you be more specific if none of this addresses your issue? Possibly some code??
From the documentation: “It is important to understand that checking if colliders are touching or not is performed against the last physics system update i.e. the state of touching colliders at that time.”
Here’s the scenario when the player is on the ground and you press the jump input:
Physics Update 1
Check if grounded: Yes, because the player was touching the ground layer in the previous physics update (Physics Update 0)
Apply the jump physics (player isn’t moved by Box2D until the next physics update)
Allow other things that can happen when the player is on the ground
Physics Update 2
Check if grounded: Yes, because the player was touching the ground layer in Physics Update 1
Calculate the new physics position and move the player there
Allow other things that can happen when the player is on the ground (even though they’re not on the ground anymore)
You can see that during Physics Update 2, the player is considered on the ground even though the physics system has now moved them away from touching the ground layer. This is how isTouchingLayers() is designed to work, so I’m wondering how other people are adapting to this. Have you used it?
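For reference, one adaptation I’ve seen people use is to latch a “just jumped” flag so the one-step-stale contact can’t re-trigger grounded logic. This is only a minimal sketch of that pattern; the class, field names, and the "Ground" layer name are illustrative, not from anyone’s actual project:

```csharp
using UnityEngine;

// Sketch of a common workaround: after applying the jump, ignore the stale
// grounded result until the collider has actually separated from the ground.
public class GroundCheck : MonoBehaviour
{
    Collider2D col;
    bool jumpedThisStep;

    void Awake() { col = GetComponent<Collider2D>(); }

    void FixedUpdate()
    {
        bool touching = col.IsTouchingLayers(LayerMask.GetMask("Ground"));

        // Suppress the contact still reported in the step right after a jump.
        bool grounded = touching && !jumpedThisStep;
        if (!touching) jumpedThisStep = false;

        if (grounded /* && jump input */)
        {
            // ApplyJump();
            jumpedThisStep = true;
        }
    }
}
```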
I think you’re perhaps getting confused by when we update the physics. The call order is this:
MonoBehaviour::FixedUpdate
Internal FixedUpdate (including physics)
This means you get the script fixed-update then, immediately afterwards, the physics update occurs.
So in your example; In Physics Update 1 you ‘apply the jump physics’ which presumably causes the objects to not be touching, then immediately after all the script fixed-updates have occurred, the physics is updated. At this point, given your example, the objects would be flagged as not touching. Now in Physics Update 2, which is the script fixed-update, checking IsTouchingXXX returns ‘false’. Note that immediately after that, the physics is updated as usual.
In other words, your ‘Physics Update’ is the script fixed-update and not the internal physics fixed-update.
This is not specific to IsTouchingXXX. Contacts are not calculated until the next physics fixed-update, nor are any of the physics objects moved, etc.
Thank you, that does clear up part of it for me. The remaining part that still has me confused is that isTouchingLayers will, about half the time, still return true in the FixedUpdate after the one where I ‘apply the jump physics’. For example, in FixedUpdate:
standing = collider2d.IsTouchingLayers(LayerMask.GetMask("Ground"));
if (!standing) {
    print("not standing");
}
if (jumpInput && standing) {
    print("starting to jump " + transform.position.y);
    ApplyJump();
}
Will sometimes, but not always, print:
So, between the 2 ‘starting to jump’ lines, the physics is updated and the transform’s position moves up along y. However, standing is still true, so it prints ‘starting to jump’ again. Maybe isTouchingLayers does this because of variability in when jumpInput is supplied, since it’s read in the Update function?
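If the double trigger does come down to when jumpInput is set in Update, one common pattern is to buffer the press in Update and consume it in FixedUpdate, so a single press can start at most one jump. A sketch, assuming a standard "Jump" input button (names are illustrative, not from the code above):

```csharp
using UnityEngine;

public class JumpInputBuffer : MonoBehaviour
{
    bool jumpQueued;

    void Update()
    {
        // Update can run several times between physics steps, so latch the press.
        if (Input.GetButtonDown("Jump"))
            jumpQueued = true;
    }

    void FixedUpdate()
    {
        if (jumpQueued)
        {
            jumpQueued = false; // consume: one press, at most one jump
            // ApplyJump();
        }
    }
}
```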
IsTouchingLayers just checks for Box2D contacts between objects on the specified layer(s); it doesn’t actually perform any work apart from looking for those contacts. If it returns true then Box2D has an active contact.
All I can say is that if you’re seeing variance then Box2D is saying that although you jumped (using a force?), you didn’t separate the ‘collider2d’ from whatever collider(s) are on the layer ‘Ground’ during the next fixed-update. As an experiment, try using a higher force, even an impulse force.
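As a concrete version of that experiment, an impulse applies the full velocity change in a single step, which should separate the colliders by the next simulation step. A minimal sketch (the class name and force value are illustrative):

```csharp
using UnityEngine;

public class JumpImpulse : MonoBehaviour
{
    public float jumpImpulse = 10f; // illustrative value; tune for your game
    Rigidbody2D rb;

    void Awake() { rb = GetComponent<Rigidbody2D>(); }

    void ApplyJump()
    {
        // An impulse changes velocity immediately instead of accumulating over
        // time, so the body moves away from the ground on the next step.
        rb.AddForce(Vector2.up * jumpImpulse, ForceMode2D.Impulse);
    }
}
```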
The only other option here is to submit a bug-case so that I can take a look.
@MelvMay : What is the internal implementation of IsTouchingLayers? For a 2D player controller, what is the best-optimized approach: doing RaycastHit2D casts in multiple directions (down, right/left), or using IsTouchingLayers?
We maintain a set of live contacts. When you call “IsTouching” or “IsTouchingLayers” we simply iterate that set of contacts looking for contacts that meet the criteria. In the case of checking for layers, we check if the contacted GameObject is set to the specified layer(s).
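Conceptually, that layer check is roughly equivalent to iterating the existing contacts yourself with GetContacts and testing each contacted object’s layer against the mask. A sketch of that equivalence only; this is not the engine’s actual source:

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class ContactUtil
{
    static readonly List<Collider2D> contacts = new List<Collider2D>();

    // Roughly what IsTouchingLayers does: walk the already-calculated contacts
    // and test each contacted GameObject's layer against the mask.
    public static bool TouchingLayers(Collider2D col, int layerMask)
    {
        int count = col.GetContacts(contacts);
        for (int i = 0; i < count; i++)
        {
            if ((layerMask & (1 << contacts[i].gameObject.layer)) != 0)
                return true;
        }
        return false;
    }
}
```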
Those two are different things altogether so hard to compare. Raycast (and any other query) perform the collision detection there and then for the query whereas the other methods above check existing contacts that already have been calculated. Queries are for speculative searching for contacts whereas IsTouching(Layers) is for existing contacts.
If you’re searching up/down/left/right then it doesn’t sound like you’re checking for existing contacts but contacts in certain directions in the future if you move in that direction.
Also, try to consider shape-casts (circlecast etc) rather than simple raycasts. Additionally, for rough point overlaps you can use OverlapPoint.
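A circle-cast grounded check along those lines might look like the sketch below; the radius, cast distance, and "Ground" layer name are illustrative values, not recommendations:

```csharp
using UnityEngine;

public class CircleCastGroundCheck : MonoBehaviour
{
    public float radius = 0.2f;   // illustrative: roughly half the character width
    public float distance = 0.1f; // illustrative: small skin below the feet

    // A shape-cast covers the full width of the shape, unlike a single ray,
    // so it won't miss a ledge that's only under one edge of the character.
    public bool IsGrounded()
    {
        RaycastHit2D hit = Physics2D.CircleCast(
            transform.position, radius, Vector2.down,
            distance, LayerMask.GetMask("Ground"));
        return hit.collider != null;
    }
}
```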
Hi !
I ran into a similar problem: I enable a previously disabled gameObject and check if it is touching any layers before setting another gameObject active again. The problem is that sometimes it reports not touching anything even though it is. I think the problem is that the first gameObject is set active and runs its FixedUpdate, with the isTouchingLayers() check, before the Unity physics update, so of course, since the gameObject wasn’t active, there are no recorded contacts. Is there a way to quickly do the physics calculation for this particular gameObject, or another way to see if the attached BoxCollider2D is touching anything?
Thank you for your answers
I have the same issue where IsTouching (or GetContacts) is still reporting active contacts a frame later. I just did a test applying an impulse force of (0, 1000), and it still reports a contact the next frame, when there is more than 10 units of space between the reported contact and the rigidbody.
The reported contact points are enabled, have a relative velocity of (0.0, -1000.0) and a separation value of -0.005000353.
I know, and have used, the overlap-collider and raycasting alternatives, but the contact points give me the information I need in my case. It seems, according to your description @MelvMay , that this isn’t the intended behavior.
If this is a render frame then you should know that physics isn’t running its simulation per-frame by default unless you’re doing that. You could have many render frames between simulation steps. This is what FixedUpdate/Update is all about. Contacts are created/updated/destroyed when the simulation steps.
Well if you’re getting contacts then they exist in the physics system (Box2D). That’s about all I can say really. Note that you can also see them in the inspector for any Rigidbody2D/Collider2D in “Info > Contacts”.
I did turn on the contact gizmos in the preferences, and that is actually quite interesting because they don’t match the returned points (though I’m not sure what influence pausing Unity has on when these are updated / rendered):
This is showing the green lines drawn from the code above; the right frame is one where I expect no contact but 2 are reported (note that the contact gizmos are not drawn that frame).
Impossible to say for sure what’s going on here. I can say that if you retrieve contacts in fixed-update, that is called immediately prior to the simulation step, so if you pause the editor, contacts may be destroyed during the simulation after you’ve retrieved them. The gizmos just do “GetContacts” and render the results, so they’ll be gone at that point, assuming they need to be destroyed.
I assumed contact points were structs, so my local copies would not change over time. Either way, the editor gizmo contact points and my drawn contact points only line up when the rigidbody is stationary. Pausing the editor and stepping through the frames clearly shows the difference.
It feels to me that when two rigidbodies are more than 10 units apart, no contact points should be reported. So either Unity is, or I am doing something wrong.