Note: A late edit provides an example C# PID Steering controller at the bottom of this post
There’s the easy way out, another way, and the hard way.
The easy way out is to rethink what a waypoint is. Instead of making a single point, make a series of very small paths that trace the curve you want. You already have that working; it is just a single point with a sharp turn. Make it a series of very small turns on short paths tracing the curve, and you're done. You could have two kinds of "waypoints": several that aren't actual target points leading into the turn, one that is the real target (midway through the turn), and several more that complete the turn (but aren't target points).
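For instance, a minimal sketch of generating those in-between points, assuming your waypoints expose positions as Vector3 and using a quadratic Bezier so the path bends past the corner (GenerateTurnPoints, turnRadius and samples are names I've made up for the illustration, not anything from your code):

using System.Collections.Generic;
using UnityEngine;

public static class TurnHelper
{
    // Sample a short curve around a corner waypoint so the path never touches the sharp
    // point. prev/corner/next are three consecutive waypoint positions; turnRadius is how
    // far before and after the corner the curve begins and ends.
    public static List<Vector3> GenerateTurnPoints( Vector3 prev, Vector3 corner, Vector3 next,
                                                    float turnRadius, int samples )
    {
        Vector3 entry = corner + ( prev - corner ).normalized * turnRadius;
        Vector3 exit  = corner + ( next - corner ).normalized * turnRadius;

        var points = new List<Vector3>();
        for ( int i = 1; i <= samples; ++i )
        {
            float t = i / (float)samples;
            // quadratic Bezier with the corner as the control point: the curve approaches
            // the corner but passes beside it rather than through it
            Vector3 a = Vector3.Lerp( entry, corner, t );
            Vector3 b = Vector3.Lerp( corner, exit, t );
            points.Add( Vector3.Lerp( a, b, t ) );
        }
        return points;
    }
}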
Another way is very related to the hard way, but it inverts the perspective. The code you posted indicates you’re thinking in terms of a controller moving the character. You could, instead, think in terms of the character navigating the waypoints, giving it the knowledge of how to turn through a waypoint on a curve. There may be assets in the store that do this kind of thing already, I don’t know.
Now, the hard way.
Thinking through this, I have to assume that travel between waypoints is a straight line, and your current approach depicts this with an immediate rotation to the orientation of the next line (the one connecting the next two waypoints).
What occurs to me is that if the robot rotates more realistically, it begins its rotation when, and only when, it is close to the next waypoint. I assume the methods you've tried merely applied a stepwise approach to the rotation, the same as you have with the change in position, which means the rotation was happening gradually throughout the travel from one point to the next (if it was completing at all). Step is apparently computed to work for position, and probably wouldn't apply as a parameter to, say, Quaternion's static function "RotateTowards", the counterpart to "MoveTowards" for Vector3. Since I didn't see your attempts with slerp or lerp, I'm assuming they failed because you haven't treated rotation independently from movement either.
If the rotation begins before the robot hits the waypoint, the robot will appear to be traveling straight when in fact it is turning (it won’t move in the direction it is facing as the turn gradually changes orientation). My point here is that the robot would travel a small curve through the waypoint, while your code for motion is only going to change at a point, not along a curve, similarly to how your rotation is snapping at the waypoint.
To travel a curve you have two choices, as in real life. One is to plot a curve that mates the end of the current line to the start of the next line such that the sharp point at the waypoint is never actually touched by the wheel; a curve passes by it as the robot turns. The other is for the robot to swerve a bit to anticipate the turn so the curve passes over the exact waypoint. The latter is more difficult mathematically, but possible. The point is that while you are contemplating a gradual rotation, you'll end up wanting to also make a gradual, curved adjustment to the path through the waypoint.
The more important step now, however, is to establish a design that will do either (you can adjust to your liking as you experiment). My sense, based on the little code I see, is that you need help fashioning a plan that lets you implement these notions.
If you experimented with Quaternion RotateTowards, you probably didn’t think to wait until your robot is close to the waypoint, and establish the number of steps to the rotation, which is independent of the steps in your path (the same applies to slerp and lerp).
RotateTowards, from the documentation:
public static Quaternion RotateTowards( Quaternion from,
                                        Quaternion to,
                                        float maxDegreesDelta );
The maxDegreesDelta may be misleading by name: it is the portion of the angle to take between from and to. If you knew you were about to rotate 60 degrees, and you wanted to take 10 steps to do that, you'd use 6 degrees as this parameter (maybe a touch more if you like; the function clamps to avoid overshoot). This is analogous to step, but not compatible with the step parameter you're using for "forward" movement. You need an independent calculation.
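As a tiny illustration (using the Points and Character names from your code; targetRotation is just a local I've introduced):

// Illustrative only: a 60 degree turn spread over 10 update steps is 6 degrees per step.
float totalTurnDegrees = 60f;
int rotationSteps = 10;
float degreesPerStep = totalTurnDegrees / rotationSteps;   // 6 degrees

Quaternion targetRotation = Points[ CurrentPosition + 1 ].transform.rotation;
Character.transform.rotation = Quaternion.RotateTowards(
    Character.transform.rotation, targetRotation, degreesPerStep );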
Now, how much is there to turn, and when do you start the rotation?
Since I don't see how you're calculating step, I can only describe what you'll need to know when, in MoveCharacter, you want the rotation to begin. You're not going to rotate to a new angle throughout all of the MoveCharacter steps. At present the angle merely switches to the next Quaternion from Points (when CurrentPosition changes). In reality, I'm sure you realize, you're updating the quaternion at every step, and it isn't necessary. That has no visible consequence, because the repeated rotation assignments for any given "step" change nothing until CurrentPosition changes, but it wastes a little compute power (which is battery life).
Even with your current approach (that snaps to the new orientation), you only needed to change rotation when CurrentPosition changed, but that’s not being calculated here.
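If you wanted to keep the snapping behavior but avoid the wasted assignments, a small guard would do it (lastPosition is a hypothetical member I'm introducing to detect the change):

int lastPosition = -1;  // hypothetical member: the waypoint index whose rotation was last applied

void ApplyWaypointRotation()
{
    if ( CurrentPosition == lastPosition ) return;  // nothing changed, skip the assignment
    lastPosition = CurrentPosition;
    Character.transform.rotation = Points[ CurrentPosition ].transform.rotation;
}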
Where you calculate the step factor and decide when CurrentPosition changes, I have to assume, you’re able to tell how far your robot is from the next waypoint. It seems obvious that would be
( Points[ CurrentPosition+1 ].transform.position - Character.transform.position ).magnitude;
Though it might be better to have a “NextPosition” member that’s clamped so it doesn’t run past the end of the Points array.
Magnitude is an expensive calculation, so you may want to preserve the distance you’ve noted as you approach the next waypoint (and thus have not yet changed CurrentPosition). I assume you’re using distance (magnitude) to the next waypoint, but I don’t have that code to know.
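Something along these lines is what I have in mind (NextPosition and distanceToWaypoint are members I'm proposing, not something already in your code):

int NextPosition;          // proposed member: clamped index of the upcoming waypoint
float distanceToWaypoint;  // proposed member: cached once per update and reused

void UpdateDistanceToWaypoint()
{
    // clamp so the index never runs past the end of Points (use Points.Count for a List)
    NextPosition = Mathf.Min( CurrentPosition + 1, Points.Length - 1 );
    distanceToWaypoint =
        ( Points[ NextPosition ].transform.position - Character.transform.position ).magnitude;
}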
However you implement it, what you require is the ability to sense when you are close to a waypoint on approach so as to trigger the start of the rotation.
To keep things clean, I prefer such a thing to be its own function. Your existing MoveCharacter may be comprehensible now, but this is going to clutter it if you implement it inline, so... (Note: I'm not testing any code here, this is pseudo code or proposed C# code; you'll have to handle any typo or syntax issues. This is just to illustrate the thought.)
void MoveCharacter()
{
    ProcessRotation();
    Character.transform.position = Vector3.MoveTowards......;
    ....
}
I am suggesting you stop setting the rotation with each step in MoveCharacter, and call a function to handle the rotation work.
In ProcessRotation:
void ProcessRotation()
{
    if ( rotating == false )
    {   // if we aren't in range, this is done for now
        if ( IsWithinWaypointRange() == false ) return;

        // indicate rotation is happening
        rotating = true;
        InitializeRotation();
    }

    if ( StepRotation() == false ) rotating = false;
}
This is an executive function (again, illustrative pseudo code, not something you can paste and use; I'm not testing anything here, and you may need to flesh it out a bit). What you see is a plan, not a lot of details. The functions handle the details (and you'll have to write them). If you think about these function names and what they should do, you'll see the plan emerging.
bool IsWithinWaypointRange()
{
    if ( distanceToWaypoint < rotationMargin ) return true;
    return false;
}
Remember I suggested you’d want to preserve the distance your current presumed method of switching waypoints probably calculates? distanceToWaypoint is what I presume to be a float that tracks that distance. It is the magnitude of a Vector3 created by subtracting the next waypoint position from the character’s current position. If you’re using some other means, then IsWithinWaypointRange may have to use magnitude of a calculated approach vector. I also propose there’s a float member rotationMargin which stores how far away from a waypoint the rotation should begin (a radius around the waypoint).
void InitializeRotation()
{
    Vector3 nextWaypointAngles = Points[ CurrentPosition + 1 ].transform.rotation.eulerAngles;
    Vector3 curWaypointAngles = Points[ CurrentPosition ].transform.rotation.eulerAngles;

    // Note in commentary below
    float angleToTurn = nextWaypointAngles.y - curWaypointAngles.y;
    .....
}
I’ve not completed this function, it is a proposal. This should initialize your rotation something like what you’ve done to initialize step for motion. I’m merely showing you how to start. I assumed here that this is a rotation on the Y axis (that the bot is on the floor, traveling waypoints on the xz plane). Adjust to your situation. The idea is to determine the angle the bot is going to turn.
Let's say the current waypoint is at 30 degrees (from eulerAngles) and the next waypoint is at 90 degrees orientation; the bot will make a 60 degree turn. It begins when the bot is within rotationMargin of the upcoming waypoint, and should continue until the bot is rotationMargin past that waypoint (so at the middle of the turn, having rotated 30 of the 60 degrees, the bot is on the waypoint).
Quaternion.RotateTowards takes the two quaternions indicating the “from” and “to” orientations, and a “step” parameter showing how much of an angle to turn in the current “step” of the rotation, much like the step parameter shows how far to move in the “MoveTowards” calculation. The difference is that the rotation is not starting at the same time (and therefore not at the same position) as the step for motion, and it is an angle, not a distance, so you need an independent value to track this.
I have no code for your step calculation, but I presume you’ve calculated the distance between waypoints, have a rate in mind, and used Time.deltaTime to calculate step as updates happen. The same kind of thing is used for the rotation, but instead of calculating a speed for distance, you’re calculating a speed for angles of rotation in degrees. For the moment I’ll leave it to you to determine how to synchronize these two related but separate events, and what member variables to create to do this (it seems you have the skills, demonstrated by getting this far as it is, just keep your wits about you).
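For what it's worth, here is one way that synchronization could fall out (a sketch only; moveSpeed is my name for whatever units-per-second rate you already use for step, and angleToTurn is the value from InitializeRotation above):

// Sketch, not tested: the turn spans roughly the stretch of path from rotationMargin
// before the waypoint to rotationMargin after it, so at a known travel speed you can
// derive a degrees-per-second rate to feed RotateTowards.
float turnDuration = ( 2f * rotationMargin ) / moveSpeed;      // seconds spent inside the margin
float rotationSpeed = Mathf.Abs( angleToTurn ) / turnDuration; // degrees per second
// then, on each update while rotating:
float degreesThisStep = rotationSpeed * Time.deltaTime;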
The StepRotation function I propose (called in the executive function ProcessRotation) does for rotation what MoveCharacter is doing for motion towards the waypoint, adjusting rotation using RotateTowards (though all of this can be applied to lerp or slerp if you prefer their results and parameters). The executive ProcessRotation initiates the rotation when on approach, calls StepRotation throughout the turn, and stops calling StepRotation when it returns false (indicating the rotation completed).
You should no longer use the rotation coming from Points[ CurrentPosition ] for “RotateToward”, but have members that indicate the “From” and “To” Quaternions as the robot passes through a waypoint (and CurrentPosition increments during that phase).
Notice, too, that in the executive “ProcessRotation” I propose, StepRotation returns false to indicate the rotation has completed, which it should be able to sense when all of the rotation steps have been exhausted.
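Pulling those pieces together, one possible shape for InitializeRotation and StepRotation might be the following (untested, illustrative C# in the same spirit as the pseudo code above; fromRotation, toRotation, rotationSpeed and moveSpeed are members I'm inventing for the illustration):

Quaternion fromRotation;  // orientation entering the turn
Quaternion toRotation;    // orientation leaving the turn
float rotationSpeed;      // degrees per second, derived as discussed above

void InitializeRotation()
{
    fromRotation = Points[ CurrentPosition ].transform.rotation;
    toRotation   = Points[ CurrentPosition + 1 ].transform.rotation;

    float angleToTurn  = Quaternion.Angle( fromRotation, toRotation );
    float turnDuration = ( 2f * rotationMargin ) / moveSpeed;
    rotationSpeed = angleToTurn / turnDuration;
}

// returns true while still turning, false once the rotation has completed
bool StepRotation()
{
    Character.transform.rotation = Quaternion.RotateTowards(
        Character.transform.rotation, toRotation, rotationSpeed * Time.deltaTime );
    return Quaternion.Angle( Character.transform.rotation, toRotation ) > 0.1f;
}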
The point I'm driving at here is that you need a separate design to initiate, track and conclude rotation. If you try it, you'll eventually get to a point where you observe the bot traveling straight along the path to a waypoint, beginning its rotation when it is within the margin of the waypoint, and continuing to rotate until it is just beyond the margin of the waypoint. It is then that you'll probably want to consider the refinement of curving the path.
At that stage you can begin considering how the rotation can feed information back to MoveCharacter where you set the Character’s position such that the position tracks a curve through the waypoint instead of an immediate change of direction implied by a point at the moment CurrentPosition is incremented.
If you were to choose to have the bot do this itself instead of controlling the bot from a controller class, you'd have to do something similar anyway. However, if you think about how a person navigates, it could be a little simpler. A person on a unicycle, for example, is constantly re-evaluating a target. We may have a target in mind, sure, but we're not pre-calculating our path. We are constantly re-evaluating our path, which accounts for any minor perturbations that happen as we balance.

This means you can merely start each cycle by orienting toward the target waypoint, moving toward it at the rate indicated, then repeat by re-evaluating at each step (update) until close to the target. If you think about that a moment, you'll notice you could nudge the bot off course, but it would re-orient and return to the path toward the waypoint.

When approaching a target (inside a margin around a waypoint), a person would consider where they are going next. If a person approaches a target point, knowing a new target follows, the mind considers a path through the immediate target that aligns towards the next target. Orientation is adjusted to plot a turn (a curve) through the target (waypoint) which orients toward the next waypoint, but without a very specific alignment, just a general 'left/right, sharp or not sharp' turn in the general direction. A person is just concerned with getting through the target (waypoint) in a way that is oriented toward the next waypoint, not exactly aimed at that waypoint. As that transitions to a phase where the new waypoint is the target, the process begins again: orient toward the target and proceed at a particular rate, re-evaluate at each moment (each update), and continue.

This somewhat "naturally" implies a curved path through the waypoint, and can end up being a bit simpler than purely geometric calculations. It is also generally applicable to finding one's way around, which means it could be coded somewhat generically for re-use in future projects. It also offers the added potential of handling minor perturbations from obstacles and other influences (like pebbles, water, wind, opponents, etc.). In other words, this is a path finding approach rather than a path following approach. You might find it easier to contemplate and implement.
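If that appeals, a bare-bones sketch of the "constantly re-evaluate" style in Unity terms could look like this (untested; moveSpeed and turnRate are my own parameters, and I'm assuming travel on the xz plane):

// Each update: turn a little toward the current target, then move in the direction faced.
// A nudge off course just produces a new heading error that gets corrected next update,
// and turning while moving naturally traces a curve through each waypoint.
void SeekTarget( Transform target, float moveSpeed, float turnRate )
{
    Vector3 toTarget = target.position - Character.transform.position;
    toTarget.y = 0f;  // assuming the bot travels on the xz plane

    Quaternion desired = Quaternion.LookRotation( toTarget );
    Character.transform.rotation = Quaternion.RotateTowards(
        Character.transform.rotation, desired, turnRate * Time.deltaTime );

    Character.transform.position += Character.transform.forward * moveSpeed * Time.deltaTime;
}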
Edit: After a delay of several days when I didn't notice comments from the OP, I found a comment that PID was proving to be a problem, so I've fashioned an example in C# (non-Unity; paste the code into a new C# console project to test and evaluate) to illustrate how PID can steer to some waypoints.
When I run this, it navigates through the five example points, turning and finding its way to each one.
The tuning is a “first pass” attempt, seems to work, might wobble more than you like so you can fiddle with Kp, Ki and Kd (in that order, likely ignoring Kd).
Further, you may want to limit the amount of adjustment applied to steering. I’ve shown a simple example of limiting to 5 degree adjustments per frame, and ignoring small changes in steering.
The key to understand, however, is that orientation of the bot in this example is an angle in degrees compatible with the Mathf library in Unity, in which zero degrees is on the X axis, and positive angles of rotation are counterclockwise. Adjust to your needs.
/* A terse visual studio C# console application to
illustrate navigating a moving object that
can be steered using PID
*/
using System;
using System.Collections.Generic;
class Vector2 // for C# standalone, there's no Vector2, this is makeshift
{
public float x, y;
public Vector2( float ix, float iy ) { x = ix; y = iy; }
public float DistanceTo( Vector2 v )
{
float xd = v.x - x;
float yd = v.y - y;
return (float) Math.Sqrt( xd * xd + yd * yd );
}
}
class Bot
{
// represents an object, robot, with a position and angle or orientation in 2D
// target represents the waypoint position
readonly float Rad2Deg = 57.295828f;
readonly float Deg2Rad = 0.017453277f;
Vector2 target;
public Vector2 position = new Vector2( 0, 0 );
public float orientation = 0f; // will be degrees
public float speed = 1.0f; // assuming 1 meter per second velocity
public PID pid = new PID();
public void SetTarget( Vector2 v ) { target = v; pid.Reset(); }
public float DistanceTo( Vector2 v ) { return position.DistanceTo( v ); }
public float AngleToTarget()
{ if ( target != null )
{ float a = (float)Math.Atan2( target.y - position.y, target.x - position.x ) * Rad2Deg;
if ( a < 0f ) a += 360f; // wrap for non-negative angles
return a;
}
return 0f;
}
public void AdjustOrientation( float a )
{
if ( Math.Abs( a ) < 0.08f ) return;
if ( a > 5f ) a = 5f;
if ( a < -5f ) a = -5f;
orientation += a;
if ( orientation < 0f ) orientation += 360f;
if ( orientation > 360f ) orientation -= 360f;
}
public void Move( float t )
{ float a = orientation * Deg2Rad;
// fashion a temporary vector (just floats) of the distance
// this time sample will move, in the direction of orientation
float xdist = speed * t;
float vector_x = xdist * (float)Math.Cos( a );
float vector_y = xdist * (float)Math.Sin( a );
position.x += vector_x;
position.y += vector_y;
}
public void CheckAndAdjustOrientation( float deltaTime )
{
float att = AngleToTarget();
float err = att - orientation;
if ( err > 180f )
{
err -= 360f;
}
if ( err < -180f )
{
err += 360f;
}
float pout = pid.CalcPID( err, deltaTime );
AdjustOrientation( pout );
}
}
class PID
{
float Kp = .25f;
float Ki = 0.05f;
float Kd = 0.001f;
float error = 0f; // retained for debugging
float prev_error = 0f;
float integral = 0f;
float output = 0f;
public void Reset() { prev_error = 0f; integral = 0f; output = 0f; error = 0f; }
public float CalcPID( float err, float deltaTime )
{
if ( deltaTime > 0 )
{
error = err;
float Pout = Kp * error;
integral += ( error * deltaTime );
float Iout = Ki * integral;
float derivative = ( error - prev_error ) / deltaTime;
float Dout = Kd * derivative;
output = Pout + Iout + Dout;
prev_error = error;
}
return output;
}
}
class Program
{
List< Vector2 > waypoints = new List< Vector2 >();
Bot bot = new Bot();
// margin for accepting a waypoint has been touched
float waypointMargin = 0.025f;
// this isn't Unity, so this assumes every frame is 1/60th of a second
float deltaTime = 0.01667f;
void InitWayPoints()
{
waypoints.Add( new Vector2( -35, 5 ) );
waypoints.Add( new Vector2( 15, 5 ) );
waypoints.Add( new Vector2( -5, -15 ) );
waypoints.Add( new Vector2( -25, -25 ) );
waypoints.Add( new Vector2( 15, -15 ) );
}
void Initialize()
{
InitWayPoints();
bot.SetTarget( waypoints[ 0 ] );
}
bool PursueWaypoint( int n )
{
if ( bot.DistanceTo( waypoints[ n ] ) < waypointMargin )
{
return true;
}
bot.CheckAndAdjustOrientation( deltaTime );
bot.Move( deltaTime );
return false;
}
void Navigate()
{
int wp = 0;
while( wp < waypoints.Count )
{
if ( PursueWaypoint( wp ) )
{ ++wp;
if ( wp < waypoints.Count ) bot.SetTarget( waypoints[ wp ] );
}
if ( wp < waypoints.Count )
{
Console.Write( "Pos: " );
Console.Write( bot.position.x );
Console.Write( ", " );
Console.Write( bot.position.y );
Console.Write( " : Angle: " );
Console.Write( bot.orientation );
Console.Write( " : To Target: " );
Console.Write( bot.AngleToTarget() );
Console.Write( " : Distance To: ");
Console.Write( bot.DistanceTo( waypoints[ wp ] ) );
Console.Write( "
");
}
}
}
static void Main(string[] args)
{
Program p = new Program();
p.Initialize();
p.Navigate();
}
}
Steering in this example is rather direct. Among the kinds of adjustments you can make to suit would be things like:
A) if the adjustment is below a limit, say 0.2 degrees, then skip the adjustment and let the bot drift off course a little bit (i.e., ignore anything from -0.2 to 0.2 degrees)
B) clamp the adjustment coming from PID so the turn can’t be more than N degrees at a time sample
C) reduce or increase Kp (0.5 worked ok), or possibly decrease or increase Ki, to smooth out and minimize the adjustments. PID tends to oscillate, and various posts on the 'net discuss how to tune the values. Options A and B can help reduce oscillations as well as fine tuning Kp and Ki (Kd tends to offer fast corrections to wide errors, which is sometimes not desirable). I had Kp = .5, Ki = .01 and Kd = .001 work fairly well with the steering adjustment limited to 2 degrees max. That had good curves through the waypoints with minimal overshoot (there's a sketch of A and B just below).
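As an illustration of A and B together, here's a small variant of AdjustOrientation with the deadband and clamp pulled out as fields you can tune (the values shown are just the ones mentioned above, nothing definitive):

// Variant of AdjustOrientation with a tunable deadband (option A) and clamp (option B).
// With Kp = 0.5f, Ki = 0.01f and Kd = 0.001f, a 2 degree clamp gave smooth curves for me.
public float deadband = 0.2f;  // ignore adjustments smaller than this many degrees
public float maxTurn = 2f;     // clamp any single adjustment to this many degrees

public void AdjustOrientation( float a )
{
    if ( Math.Abs( a ) < deadband ) return;            // option A: let the bot drift a little
    a = Math.Max( -maxTurn, Math.Min( maxTurn, a ) );  // option B: limit the turn per sample
    orientation += a;
    if ( orientation < 0f ) orientation += 360f;
    if ( orientation >= 360f ) orientation -= 360f;
}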
The overall system functions even when perturbed. So, for example, you could “swerve” by intercepting the call to “AdjustOrientation” such that at some margin to the waypoint you could arbitrarily swerve left or right, adjusting the steering so as to anticipate the upcoming turn, then let PID take back over. This would cause the bot approaching a waypoint to consider that if the next waypoint is toward the right, the bot would swerve toward the left before reaching the waypoint, then return (turning right) on its own through the waypoint and then beyond. As it is written, the bot passes “straight through” the waypoint, THEN begins its turn toward the next waypoint.
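If you want to experiment with that swerve, one crude way (untested; swerveMargin and the swerved flag are my own additions) is a helper in Program, called from PursueWaypoint before CheckAndAdjustOrientation, that nudges the heading once, away from the upcoming turn, and then lets PID pull the bot back through the waypoint:

bool swerved = false;     // reset to false each time SetTarget moves on to a new waypoint
float swerveMargin = 3f;  // distance at which to start anticipating the turn

void MaybeSwerve( int n )
{
    if ( swerved || n + 1 >= waypoints.Count ) return;
    if ( bot.DistanceTo( waypoints[ n ] ) > swerveMargin ) return;
    swerved = true;

    // the heading of the next leg tells us which way the upcoming turn bends
    float nextLeg = (float)Math.Atan2( waypoints[ n + 1 ].y - waypoints[ n ].y,
                                       waypoints[ n + 1 ].x - waypoints[ n ].x ) * 57.29578f;
    float turn = nextLeg - bot.orientation;
    if ( turn > 180f ) turn -= 360f;
    if ( turn < -180f ) turn += 360f;

    // swerve opposite the turn (AdjustOrientation already clamps this to 5 degrees)
    bot.AdjustOrientation( turn > 0f ? -5f : 5f );
}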