# Managing floating point rounding

Hello. I am somewhat new to Unity. I have a 2D card game. In my code, I instantiate an object, then use Lerp to move the object to a spot on the screen. Before lerping, I do some calculations to determine the start and end positions, and I need some degree of accuracy. If it helps: I am animating a playing card moving from a player's position on the board to a discard pile. I want to offset each card so that, as a new card is added, it does not completely hide the previous one. So I add an offset of 0.5. But between doing the calculations and the Lerp call, floating point errors creep in, and it doesn't take long for these errors to accumulate enough that the cards stop looking clean. Any suggestions? Here is my code:

``````csharp
Vector3 offsetPos = CardObject.transform.position;

// ...

while (i < 1.0f)
{
    i += Time.deltaTime * rate;
    CardObject.transform.position = Vector3.Lerp(startPos, offsetPos, i);
    yield return null;
}
``````

EDIT:

I thought more detail might help. I have an event that's called like this:

``````csharp
public void PenalizePlayer()
{
    for (int i = 0; i < 2; i++)
    {
        if (PlayerCards.Count > 1)
        {
            // Gets the number of cards in the discard pile. I want to model a player
            // discarding two cards from the bottom of their deck and putting them at
            // the bottom of the discard pile.

            // ..code to get values

            // Just an object that refers to the player's hand; create a clone.
            GameObject go = cardImages[0];
            GameObject newCard = Instantiate(go, go.transform.position, Quaternion.identity);
            // Want the cards to lie under the top cards.
            int sortingId = -(i + numberCardsInDiscardPile);

            // This should animate the newly instantiated card and lerp it to the
            // position of the discard pile.
            StartCoroutine(Lerp(newCard, go.transform.position, discardPileScript.cardHolder.transform.position, gameMachineScript.timeBetweenCardsDelt, cardToPullFromDeck, true, sortingId));

            // .. other stuff
        }
    }
}
private IEnumerator Lerp(GameObject CardObject, Vector3 startPos, Vector3 endPos, float time, Card card, bool isPenalty, int sortingId = 0)
{
    float i = 0;
    float rate = 1.0f / time;
    Vector3 offsetPos = CardObject.transform.position;

    // To track the positions of the cards in the discard pile, I add them to an
    // empty game object, in order. This code gets the first game object in that
    // parent and its position, then subtracts 0.5f.
    float newOffset = discardPileScript.cardHolder.transform.GetChild(0).transform.position.x - 0.5f;
    // This debug is where I see the issue. All the game objects in the discard pile
    // are in their right positions, per the inspector. The first card has an x
    // position of 0.5.
    Debug.Log(newOffset);
    // When the debug prints out the new offset position of the new object, the
    // first item that gets added now has an x value of -1.117587e-08 and the second
    // object has an x value of -0.355666 (in the inspector). But the debug log says
    // -0.05000001 and -0.505666.

    // The second time this code runs, now using the last added card as its base
    // reference, the next card has, in the inspector, an x value of -0.855666 and
    // the last card x = -0.2341224. Though, in the debug, the values are -1.055666
    // and -0.634. It seems like applying the 0.5f offset gets me values that are
    // pretty far off, in terms of floating point accuracy. I assume it's my code?

    offsetPos.x = newOffset;
    offsetPos.y = endPos.y;
    offsetPos.z = endPos.z;

    while (i < 1.0f)
    {
        i += Time.deltaTime * rate;
        CardObject.transform.position = Vector3.Lerp(startPos, offsetPos, i);
        yield return null;
    }
    CardObject.transform.position = offsetPos;

    CardObject.transform.GetChild(0).GetComponent<SpriteRenderer>().sortingOrder = sortingId;
}
``````

Well, the main thing to consider is properly finalizing the position once the movement has finished:

``````csharp
while (i < 1.0f)
{
    // ...
}
// After the loop concludes, snap to the exact intended position.
CardObject.transform.position = offsetPos;
``````

Because your card movements only update while the interpolator is less than 1, they never quite *reach* their destination. Additionally, incrementing by Time.deltaTime (as you should in this situation, don't get me wrong) means the size of the final step is frame-dependent, leaving an arbitrary margin of error in the final position unless you correct it afterward.
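
Beyond snapping, it may also help to compute each card's target directly from how many cards are already in the pile, rather than reading the previous card's transform, which can still be mid-animation when the next coroutine starts (your `PenalizePlayer` loop launches two coroutines almost simultaneously). Here is a sketch of that idea, reusing the names from your post (`discardPileScript.cardHolder`, the 0.5-unit spacing); the base x of 0.5f for the first card is an assumption taken from your description and may need adjusting for your scene:

``````csharp
private IEnumerator Lerp(GameObject cardObject, Vector3 startPos, Vector3 endPos,
                         float time, Card card, bool isPenalty, int sortingId = 0)
{
    Transform pile = discardPileScript.cardHolder.transform;

    // Parent the card into the pile immediately, so coroutines started in the
    // same frame each see an up-to-date childCount and claim distinct slots.
    cardObject.transform.SetParent(pile);
    int index = pile.childCount - 1; // this card's slot in the pile

    // The target is a pure function of the slot index -- no other card's live
    // position ever feeds into it, so rounding cannot accumulate across cards.
    // (The 0.5f base x is assumed from "the first card has an x position of 0.5".)
    float targetX = 0.5f - 0.5f * index;
    Vector3 targetPos = new Vector3(targetX, endPos.y, endPos.z);

    float t = 0f;
    float rate = 1.0f / time;
    while (t < 1.0f)
    {
        t += Time.deltaTime * rate;
        cardObject.transform.position = Vector3.Lerp(startPos, targetPos, t);
        yield return null;
    }

    // Snap to the exact computed target so the stored position carries no
    // interpolation residue.
    cardObject.transform.position = targetPos;
    cardObject.transform.GetChild(0).GetComponent<SpriteRenderer>().sortingOrder = sortingId;
}
``````

The key design choice is that each position is derived from an integer index and a fixed spacing, so every card's x is computed from scratch instead of chaining off a neighbour's possibly-in-flight value.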