2D Collision between two moving kinematic rigidbodies with character controllers?

Most 2D character controllers I've seen use the player's movement distance to direct the casts used for collision. This works when colliding against static colliders, but it doesn't account for the case where the player is standing still and a solid moving object collides with them.

What options are there for this? I know it’s not a simple answer, but I’m more looking for possible very broad conceptual solutions.

The only one I can think of is an elaborate set of static casts for each influenceable entity, originating from within that entity's collider. It involves substepping the movement of ALL moving objects into multiple smaller chunks per frame, each shorter than the length of the shortest cast, to prevent tunneling; then, after each movement chunk, each entity runs its collision casts. It works, but it's a pain in the ass having to break physics down into multiple checks per entity per frame, and it's not exactly ideal for performance.
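A rough sketch of that substepping idea, written as neutral pseudocode in Python rather than Unity C# (names like `substep_count` and the `Entity` interface are hypothetical; `shortest_cast` stands in for the shortest collision ray, e.g. half the player's width):

```python
import math

def substep_count(max_speed, dt, shortest_cast):
    """Number of substeps so that no entity moves farther than the
    shortest collision cast in a single substep (prevents tunneling)."""
    max_move = max_speed * dt  # farthest any entity travels this frame
    return max(1, math.ceil(max_move / shortest_cast))

def run_frame(entities, dt, shortest_cast):
    """Move every entity in tunnel-safe chunks, re-running collision
    casts after each chunk."""
    fastest = max(e.speed for e in entities)
    n = substep_count(fastest, dt, shortest_cast)
    sub_dt = dt / n
    for _ in range(n):
        for e in entities:          # each entity moves one small chunk...
            e.move(sub_dt)
        for e in entities:          # ...then re-runs its collision casts
            e.resolve_collisions()
```

The key property is that the substep count scales with the fastest mover, which is exactly why every entity pays the cost once any entity moves quickly.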

Can anyone think of any alternative ways?


This is my current process (at an extreme speed for demonstration). Its major limitation is that, every frame, all moving entities that can physically collide with each other must have their movement divided into chunks no longer than the shortest raycast (in this case half the width of the player, as seen above), with collisions run after each substep. Otherwise the solid object will tunnel through: the ray will spawn inside the moving terrain, the entity will fail to adjust itself, and collisions will ultimately fail.

Another limitation is that it requires objects to move in a specific order. In this case the moving terrain has to move first and the player afterwards, so the raycasts can detect the box's new subframe collider position and the player can adjust itself. Then the box moves again, the player adjusts itself again, and so on for however many subframes are required. All entities follow a fixed order of influence in which they are updated each step. This also means all collision interactions are essentially infinite weight (moving terrain) vs. zero weight (the entity that is pushed by the moving terrain by adjusting itself when it overlaps), as shown above. Another possible interaction is static terrain (infinite weight) vs. moving terrain (zero weight).

Again, it works, but it's a major pain in terms of framework: I have to divide each update into pre-substep, substep, and post-substep phases for code that runs before and after physics, which is a headache in general. It also means I need to process all entities in a custom update manager to get the timings and ordering handled the way they need to be, so I'm very interested in hearing alternatives. I'm also fairly certain it's not possible using dynamic Rigidbody2D physics, or with Rigidbody2Ds in general, because each frame requires the transform to update mid-frame, and modifying the Rigidbody2D doesn't do that until the Unity physics step. It could theoretically work if I cranked the fixed timestep up to 240+ FPS, but that's probably not a reasonable option.
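The pre-/sub-/post-phase split and the fixed influence order described above could be sketched roughly like this (again in Python as neutral pseudocode; `UpdateManager`, `TraceEntity`, and the phase method names are all hypothetical stand-ins, not Unity API):

```python
class TraceEntity:
    """Minimal entity that records when each phase runs."""
    def __init__(self, name, log):
        self.name, self.log = name, log
    def pre_substep(self):  self.log.append(("pre", self.name))
    def substep(self, dt):  self.log.append(("sub", self.name))
    def post_substep(self): self.log.append(("post", self.name))

class UpdateManager:
    """Toy update manager: pre-phase, N ordered substeps, post-phase.
    Entities are listed in influence order (e.g. moving terrain first,
    then the player that reacts to it)."""
    def __init__(self, entities_in_influence_order):
        self.entities = entities_in_influence_order

    def frame(self, dt, substeps):
        for e in self.entities:          # gameplay code before physics
            e.pre_substep()
        sub_dt = dt / substeps
        for _ in range(substeps):
            for e in self.entities:      # influencers step first, so later
                e.substep(sub_dt)        # entities cast against new poses
        for e in self.entities:          # gameplay code after physics
            e.post_substep()
```

The ordering inside the substep loop is what encodes the "infinite weight vs. zero weight" relationship: whatever steps earlier is authoritative for everything that steps after it.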

I'm open to any and all suggestions for either optimizations or alternatives or anything. Also if you have any questions let me know, or if you notice I'm doing something idiotic for a dumb reason then please definitely let me know.

I've attempted to use a dynamic Rigidbody2D as a preliminary collision pass, with my raycasts then handling the positional nuance, but it has severe limitations.

For example, here is my character moving very fast over this sharp bump. It traverses it completely fine with a kinematic rigidbody (since that ignores Box2D collision response), but with a dynamic rigidbody as a preliminary collision you can see that, going from frame 1 to frame 2 inside the physics step, it collides with the sharp bump and halts the character's position early, ruining the intended movement.
Can anyone think of a workaround? Because using a dynamic rigidbody can make other issues easier, but if it can't do this then I'm back to asking what I asked in my last post in this thread.

The only reason you'd contact that "bump" is if it were two separate colliders (technically two separate physics shapes), which will never produce a continuous surface. An EdgeCollider2D produces separate shapes, but they are virtually joined together into a single continuous surface using "adjacent edges". You can also join multiple EdgeCollider2Ds yourself using adjacent edges. You can see that being used here to deal with extreme discontinuities (steps):

Also, whilst I don't want to get drawn into design discussions on character controllers, this basic example of stepped kinematic motion might have some API usage that could be useful, where it ensures it's not overlapping using "Physics2D.Distance". Whilst it doesn't demonstrate the surrounding geometry moving, in theory it wouldn't have a problem with that, with a few potential modifications:

When using the CompositeCollider2D in Outline mode, it produces edges which are again joined using adjacent edges. This stops discontinuities and produces a single continuous surface.

This is also supported for edges by the new CustomCollider2D and PhysicsShape2D.

That all said, you won't get the above at all in a single physics simulation step. A body won't move around corners or follow paths like that; it'd take multiple simulation steps. Even if it did, it'd have to be moving quickly to move like that at the default 1/50th of a second timestep.
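To put a number on that last point, here is the back-of-the-envelope arithmetic (illustrative only): at the default fixed timestep of 1/50 s, a body covers `speed * 0.02` units per simulation step, so crossing even a 1-unit feature within a single step implies an extreme speed.

```python
FIXED_DT = 1 / 50  # Unity's default fixed timestep, 0.02 s

def units_per_step(speed):
    """Distance a body covers in one simulation step."""
    return speed * FIXED_DT

def speed_to_cross(distance):
    """Speed needed to cross `distance` units within a single step."""
    return distance / FIXED_DT

# e.g. crossing a 1-unit bump inside one step requires 50 units/s
```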

You're right that for some reason I built that corner out of two different surfaces, though even a continuous surface produces undesirable results, for the reason you stated in your last paragraph. However, I realized I was doing movement wrong to begin with, and apparently I don't know how Rigidbody2D movement works in general, even though I thought it was pretty cut and dried.

How am I supposed to move a Rigidbody2D in FixedUpdate while also moving the transform multiple times that frame for preliminary collision checks?

I thought I could get away with it by caching the transform position at the start of the frame and setting it back before the rigidbody2d.MovePosition(targetPos) call, but apparently I can't. I also realized that a relevant factor is when transforms are synced. I had AutoSyncTransforms on, then realized that's wrong, but I also don't know why it would produce wrong results, since the rigidbody would still get auto-synced to the cached transform.position change before the rigidbody.MovePosition(targetPos) call.

This is my rigidbody logic:

Vector3 ogPos = transform.position;
Move(); // moves transform.position + transform.rotation multiple times
Vector3 targetPos = transform.position; // target position
transform.position = ogPos; // return transform to original position
//Physics2D.SyncTransforms(); // syncing here causes movement to occur
rigidbody2d.MovePosition(targetPos);
//Physics2D.SyncTransforms(); // syncing here causes no movement to occur

But with AutoSyncTransforms off, I'm confused why the SyncTransforms() ordering above matters, since to my understanding MovePosition doesn't actually move the rigidbody position until the internal physics step, which doesn't happen until after FixedUpdate, right? So why would the ordering above produce different results, especially since targetPos should be unaffected anyway?

What am I missing? I’m sure it’s obvious but I for some reason can’t visualize it.

Either way, I thought of a way to avoid my bump dilemma: having the collider that interacts with Box2D cover only the top half of the character, so the ground rays can do the proper grounded position checking while the top half collides with upcoming solids. This works for the bump case and various others, but it comes with one drawback: when it hits corners like this, it gets squished into the ground.
Is there some way to tell Box2D to only produce collision normals in a direction clamped to left or right, or opposite to velocity? I'm sure the answer is no, but is there some alternative way I could achieve the ideal position above? It's kind of a ridiculous ask, I understand. I'm probably doomed to just using my raycast/boxcast system.
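Box2D itself won't filter normals like that, but the clamping could be done manually on the normals you read back from contacts. A hedged sketch of the idea in plain Python vector math (not Unity API; the function name and the exact policy are assumptions): drop the vertical component of a contact normal and keep only a horizontal push opposing travel.

```python
def clamp_normal_horizontal(normal, velocity):
    """Restrict a contact normal (x, y) to the horizontal axis,
    pointing against the horizontal direction of travel."""
    nx, _ = normal
    vx, _ = velocity
    if vx != 0:
        nx = -1.0 if vx > 0 else 1.0   # oppose horizontal velocity
    elif nx != 0:
        nx = 1.0 if nx > 0 else -1.0   # no velocity: keep horizontal sign
    else:
        return (0.0, 0.0)              # purely vertical contact: ignore
    return (nx, 0.0)
```

Applied to the corner case above, a mostly-upward normal from the corner would either be ignored or flattened to a pure sideways push, leaving the ground rays in charge of the vertical position.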

***I found a way to do it, but it might not be worth trying to make it work for all cases. If I instead use kinematic mode and get the collision point in OnCollisionEnter2D, I can find the intersection to get the offset of where the transform should be to collide seamlessly.
But I think I'm just trying to achieve precision that could ultimately be designed around, instead of trying to make it perfect.
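That depenetration idea could be sketched like so (Python stand-in for the contact data a callback like OnCollisionEnter2D exposes, i.e. a contact normal plus a signed separation that is negative when overlapping; the function name is hypothetical):

```python
def depenetration_offset(normal, separation):
    """Offset to add to the transform so the colliders just touch.
    `separation` is the signed gap along `normal`; negative = overlap."""
    if separation >= 0:
        return (0.0, 0.0)              # not overlapping: nothing to fix
    nx, ny = normal
    push = -separation                 # overlap depth
    return (nx * push, ny * push)      # move out along the contact normal
```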

You don’t. Changing the Transform is doing physics backwards.

AutoSyncTransforms / SyncTransforms is not a forward-looking "feature"; it was there solely for backwards compatibility when the Unity transform system changed years ago. You cannot use it to brute-force your behaviour. Turn AutoSyncTransforms off (it's off by default) and never use SyncTransforms.

It happens when the simulation steps, yes. The default SimulationMode is FixedUpdate, yes, but it can also be per-frame (Update) or Script (manual).

I'm honestly not really following, and generally I try not to get dragged into devs-writing-controllers discussions because in nearly all cases it's very custom, and trying to change how the physics system works is often a distraction from the controller logic being poor. In the end, if you perform queries to determine where the controller is, that has absolutely nothing to do with the simulation step.

I'm not following, but I'm also not sure why you'd want this. If you want to clamp the direction, then do it yourself. I presume all the above is kinematic, so you don't get any collision response from contacts anyway. If it's dynamic, then that doesn't sound like a good body type for a controller.

Did you look at Physics2D.Distance and the example above? It allows you to stay in contact with a surface, should you wish to.
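For context, `Physics2D.Distance` reports the closest points between two colliders and a signed distance (negative when overlapping); the stay-in-contact use amounts to moving by that gap. A toy circle-vs-horizontal-ground illustration of that shape of data in Python (not the Unity API; the names and the `max_gap` snap threshold are assumptions):

```python
def circle_ground_distance(cx, cy, r, ground_y):
    """Signed distance from a circle (center cx, cy, radius r) to a
    horizontal ground line; negative when overlapping."""
    return (cy - ground_y) - r

def snap_to_ground(cx, cy, r, ground_y, max_gap=0.1):
    """Keep the circle in contact with the ground: close small gaps
    downward, and push back out of any overlap."""
    d = circle_ground_distance(cx, cy, r, ground_y)
    if 0 < d <= max_gap:
        cy -= d                        # close the gap to rest on ground
    elif d < 0:
        cy -= d                        # depenetrate upward
    return cx, cy
```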

Sorry, yeah, I understand some people spend years making their own character controllers, so it's not exactly an easy topic to get into the nitty-gritty specifics of. Any controller has thousands of edge cases that need to be accounted for, often specific to each project, so it's hard to really explain my issues without examples, and even then there's no easy answer.

Though it concerns me when you say AutoSyncTransforms / SyncTransforms are on the way out and shouldn't be used, because my controller depends on mid-frame transform movement and rotation, followed by an immediate sync, so that multiple raycasts/boxcasts per frame can detect the new collider position and move properly.
Here’s a video of my current system using raycasts for ground checks and boxcasts for side checks.


Even though that movement is unrealistically fast, as you can see there's some pretty drastic transform rotation happening mid-frame that seems impossible to do precisely with a rigidbody (particularly the 3rd and 5th examples), even if I manually run Physics2D.Simulate something like ten times per FixedUpdate so each object moves at most ~0.1 units per simulation, plus the raycasting afterward for repositioning. That seems like a big performance loss compared to moving transforms and just doing raycasts, but maybe I just don't fully understand it.

So when you imply that it's not forward-looking, might no longer exist one day, and shouldn't be used, I don't know how I'd replicate this by relying on just the physics step if that becomes my only option. Which is why I made this post: to see if anyone had any ideas. And yes, I had looked at the PhysicsExamples2D GitHub project, and specifically that example scene you referenced, before I made this thread, and I wasn't sure how I could use Physics2D.Distance to do what my video demos, specifically because of so much nonlinear and rotational movement happening within a frame.

I never said that though. I said they were implemented for backwards compatibility behaviour which has been spoken about many many times on these forums. They are not for what you are using them for at all.

Here's one of the many explanations of why both 2D and 3D physics had to implement it. It's NOT a feature. It's off by default. It's there for other reasons, and it isn't something you'd use to write a character controller. It landed all the way back when the job-system implementation required Transforms to work in a new way. That change broke physics behaviours, so we were forced to implement this as a backwards-compatibility measure; otherwise the Transform change would have broken physics. It wasn't a nice position to be in, but we had no choice. The end result, though, is a much faster Transform system that has no side-effects when changing Transforms.

You don’t even need it if you DON’T change Transforms. You should NEVER change transforms when using physics.

I can only tell you facts; I'm not here telling you how to implement your character controller. As I've already said, I'm honestly not going to get into the design of it, because it'd likely take a lot of time for me to understand what you're doing first, and that isn't why I'm here on the forums, if I'm honest. I know you're not asking for that, but I think it's worth saying again.

I can tell you exactly what 2D physics features do, how you choose to use them is obviously up to you. I do not want to discourage you though; if you have any specific questions, I’ll be more than glad to answer them but I cannot tell you how to best do what you want to do above. I will add that I’m also on annual leave right now so my responses may be a little slow; normally I’m on these forums a lot.