Vector math not clear.

Vector3 RTM_Old = (tgt_Old_Pos - msl_Old_Pos).normalized; // unit vector from missile to target, previous frame
Vector3 RTM_New = (tgt_Pos - msl_Pos).normalized;         // unit vector from missile to target, current frame

Vector3 LOS_DELTA = RTM_New - RTM_Old; // Every frame LOS_DELTA is 0!
float LOS_RATE = LOS_DELTA.magnitude;

As the comment above describes, the difference between RTM_Old and RTM_New is always 0.
For example, (0.1, 0.5, 0.8) - (0.0, 0.5, 0.8) results in a delta of (0, 0, 0).
So how can I fix this issue, or where is my mistake in the vector math?

Are you sure that LOS_DELTA is actually (0, 0, 0)? It seems like everything is working correctly, and the confusion comes from the displayed values being rounded, not the actual values.

If you are only looking at one decimal place, two floats that appear to be 0.1 apart could actually differ by anywhere between 0.0 and 0.2. For instance, we can recreate your situation with the following code:

Vector3 v1 = new Vector3(0.04f, 0.5f, 0.8f);
Vector3 v2 = new Vector3(0.06f, 0.5f, 0.8f);
Vector3 delta = v2 - v1;

Debug.Log("v1: " + v1 + "\n" +
          "v2: " + v2 + "\n" +
          "delta: " + delta + "\n" +
          "delta.x == 0.0: " + (delta.x == 0.0f) + "\n" +
          "delta.x == 0.02: " + (delta.x == 0.02f));

The printout is

v1: (0.0, 0.5, 0.8)
v2: (0.1, 0.5, 0.8)
delta: (0.0, 0.0, 0.0)
delta.x == 0.0: False
delta.x == 0.02: True

This shows that:

  1. v1.x is printed out as 0.0 even though the actual value is 0.04
  2. v2.x is printed out as 0.1 even though the actual value is 0.06
  3. delta.x is printed out as 0.0 even though the actual value is 0.02
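
If you want to confirm this in your own project, you can log the components with a higher-precision format instead of relying on Vector3's default one-decimal display, and use an approximate comparison rather than ==. Here is a minimal sketch; the class name and the missile/target references are placeholders, not your actual fields:

using UnityEngine;

public class LosDebug : MonoBehaviour
{
    // Placeholder references; wire these up to your own objects.
    public Transform missile;
    public Transform target;

    Vector3 rtmOld;

    void LateUpdate()
    {
        Vector3 rtmNew = (target.position - missile.position).normalized;
        Vector3 losDelta = rtmNew - rtmOld;

        // "F6" prints six decimal places instead of Vector3's default one,
        // so a delta of ~0.02 is no longer displayed as 0.0.
        Debug.Log("LOS delta: " + losDelta.ToString("F6") +
                  "  rate: " + losDelta.magnitude.ToString("F6"));

        // For float checks, prefer an approximate comparison over ==.
        if (Mathf.Approximately(losDelta.magnitude, 0f))
            Debug.Log("LOS rate really is (approximately) zero this frame.");

        rtmOld = rtmNew;
    }
}

With that logging in place you should see the small but non-zero LOS delta each frame, which is exactly the value your guidance code is already computing.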