Hi. I’ll get right to the questions (there are two of them):
First Question
void Awake()
{
    player = GameObject.FindGameObjectWithTag(Tags.player).transform;
    relCameraPos = transform.position - player.position;
    relCameraPosMag = relCameraPos.magnitude - 0.5f;
}
Regarding the relCameraPosMag line:
We shorten relCameraPos.magnitude by 0.5f. In the tutorial, they said this is because the transform position is taken from the character's feet, and we don't want that.
Why don't we want it? Is it because when we raycast to the player to check whether a wall is in the way, we'd be raycasting all the way down to his feet, and that causes problems?
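To check my understanding of the arithmetic, here is a minimal sketch (not Unity code; plain tuples stand in for Vector3, and the positions are made up) showing how subtracting 0.5f makes the stored ray length stop half a unit short of the player's pivot at his feet:

```python
import math

def magnitude(v):
    """Euclidean length of a 3-component vector (tuple)."""
    return math.sqrt(sum(c * c for c in v))

# Hypothetical positions: camera behind and above, player pivot at his feet.
camera_pos = (0.0, 5.0, -5.0)
player_pos = (0.0, 0.0, 0.0)

# Vector from the player up to the camera, as in Awake().
rel_camera_pos = tuple(c - p for c, p in zip(camera_pos, player_pos))

full_distance = magnitude(rel_camera_pos)
rel_camera_pos_mag = full_distance - 0.5  # shortened ray length

# A ray of this length cast from the camera toward the player ends
# 0.5 units short of the pivot, so it can't register hits on geometry
# sitting right at the feet (e.g. the floor).
print(full_distance, rel_camera_pos_mag)
```

If that reading is right, the raycast in ViewingPosCheck deliberately never reaches the floor under the character.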
Second Question
void SmoothLookAt()
{
    // Create a vector from the camera towards the player.
    Vector3 relPlayerPosition = player.position - transform.position;

    // Create a rotation with the relative position of the player as the forward vector.
    Quaternion lookAtRotation = Quaternion.LookRotation(relPlayerPosition, Vector3.up);

    // Lerp the camera's rotation between its current rotation and the rotation that looks at the player.
    transform.rotation = Quaternion.Lerp(transform.rotation, lookAtRotation, smooth * Time.deltaTime);
}
bool ViewingPosCheck(Vector3 checkPos)
{
    RaycastHit hit;

    // If a raycast from the check position to the player hits something…
    if (Physics.Raycast(checkPos, player.position - checkPos, out hit, relCameraPosMag))
    {
        // … and it is not the player…
        if (hit.transform != player)
        {
            // … this position isn't appropriate.
            return false;
        }
    }

    // If we haven't hit anything, or we've hit the player, this is an appropriate position.
    newPos = checkPos;
    return true;
}
In both highlighted expressions (player.position - transform.position and player.position - checkPos), why do we not simply use the player's position? What benefit does taking a relative vector give?
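My current guess, sketched in plain Python (tuples in place of Vector3, invented positions): both Quaternion.LookRotation and the direction argument of Physics.Raycast expect a direction, and player.position on its own is only the camera-to-player direction in the special case where the camera sits at the world origin:

```python
def subtract(a, b):
    """Component-wise a - b for 3-component tuples."""
    return tuple(x - y for x, y in zip(a, b))

player_pos = (3.0, 1.0, 4.0)
camera_pos = (0.0, 5.0, -5.0)

# Direction from the camera toward the player -- what the relative
# vector in the script computes.
rel_player_pos = subtract(player_pos, camera_pos)

# Using player.position directly as a "direction" would instead point
# from the world origin to the player, which is a different direction
# unless the camera happens to be at the origin.
dir_from_origin = player_pos
print(rel_player_pos, dir_from_origin)
```

Is that the whole story, or is there more to it?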
Thank you for your time. If this question has already been answered (I didn't see it on Unity Answers or the forums), please point me in the direction of that thread.