Using Time.deltaTime to increment a value gives fixed-step values

Hello All.

So I am making a 3-click power/accuracy system of the kind often used in sports games, especially golf.
The problem is I have a dial (see below) along with a needle that has to move from 0° to 225° in a specific time frame (1.7 seconds).
On the way back down, the user has to stop the needle as close to the yellow line as possible to determine his accuracy in degrees.


To do this I did the following:
To move 225° in 1.7 s, divide 225° by 1.7 s
to get that in one second the needle has to rotate about 132° (225 / 1.7 ≈ 132.35).
So to move the dial I have the following in my script:

    float Rotation = 0;

    void Update()
    {
        Rotation -= Time.deltaTime * 132f;
    }

This works for rotating the needle 225° in ~1.7 seconds without a problem; however, the Rotation value
is also used to determine accuracy when the player stops the needle by clicking.
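As an aside, the same motion can be computed from total elapsed time instead of an accumulated sum, which avoids floating-point drift over many frames. A minimal sketch, assuming a `swingStartTime` field set when the swing begins (the field name is made up; the 225°/1.7 s constants are the numbers above):

    float swingStartTime;   // assumed: set to Time.time when the swing starts

    void Update()
    {
        float elapsed = Time.time - swingStartTime;
        // 225 degrees over 1.7 seconds; clamp so the needle stops at the top.
        float rotation = -Mathf.Min(elapsed, 1.7f) * (225f / 1.7f);
        transform.localEulerAngles = new Vector3(0f, 0f, rotation);
    }

This does not by itself give sub-frame precision, but it guarantees the needle reaches exactly 225° at 1.7 s regardless of frame-rate hiccups.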

And this value comes back in fixed increments because of Time.deltaTime.
To explain: say I get 60 fps. As far as I understand, the value of Time.deltaTime will be 1/60 ≈ 0.0167,
more or less.
Meaning the result of
Rotation -= Time.deltaTime * 132;
will more often than not be an increment of about 2.2° per frame,
so the player's resulting accuracy values will mostly be multiples of 2.2° and not
any value in between, which is not ideal.

So I am looking for a way to achieve smaller increments (ideally smaller than 1°) while keeping the same time frame.
What I have thought of doing is splitting the
Rotation -= Time.deltaTime * 132;
statement into two (or more, using a for loop)
Rotation -= Time.deltaTime * 66;
statements, but I am not sure whether this will have any effect.
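For what it's worth, the split can be checked by expanding the arithmetic: both subtractions happen inside the same Update call and therefore see the same Time.deltaTime, so the total per-frame step is identical and the resolution does not change.

    // One statement:
    Rotation -= Time.deltaTime * 132f;   // step = dt * 132

    // Split into two -- same dt both times, so the frame still moves dt * 132:
    Rotation -= Time.deltaTime * 66f;
    Rotation -= Time.deltaTime * 66f;
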

One basic answer here is that game engines are indeed literally frame-based.

Sure, you can look into fixedDeltaTime and so on, but it’s just “not a thing” on touch-screen devices/gaming.

Imagine you were a film maker (think of old fashioned plastic film which runs at 24 fps). You are asking “how do I do such-and-such at a higher time resolution?” Well, of course you absolutely simply can’t.

Just purely FYI

in video games when exactly this happens, you just randomly add a small amount, i.e. you randomly pick a value in between the last and the next frame’s value.
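A sketch of that idea in Unity terms, assuming the needle setup from the question (the field names are made up; `Mathf.Lerp`, `Random.value`, and `Input.GetMouseButtonDown` are standard UnityEngine calls):

    float prevRotation;
    float rotation;

    void Update()
    {
        prevRotation = rotation;
        rotation -= Time.deltaTime * 132f;

        if (Input.GetMouseButtonDown(0))
        {
            // The real click happened at some unknown instant inside the frame,
            // so report an angle between the previous and current frame's value.
            float clickedAngle = Mathf.Lerp(prevRotation, rotation, Random.value);
            // clickedAngle can now land anywhere in the ~2.2 degree span,
            // not only on multiples of it.
        }
    }

The reported angles are then continuous rather than quantized, which is exactly what the question asks for, at the cost of the sub-frame portion being random rather than measured.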

1 - it’s inconceivable you could get anything “more precise” than this, given the vagaries of touch-screen devices and OSes

(Note for example, say Apple themselves were offering, let’s say, a “scientific stopwatch” or something like that - even them being able to program at the system level. It would be nonsensical on a touchscreen device.)

2 - it’s just a video game - nobody cares.

Note that this question is a duplicate many times over.

For example, here is one of the top Unity engineers explaining the situation:

If you think of it as how many Update calls they have to tap in each area, I think your problem goes away.

If you’re getting 60 fps, that’s 60 Update calls in a second, at roughly 17 milliseconds each. I think that’s well below the tolerance of human reaction time. Suppose they have to tap during an exact 10th of a second – that’s 6 frames. Jumps/drops in frame rate, and exactly when they happen, will change it some, but not by much.

I think that’s why input is only checked once per Update. 60 fps is fine-grained enough (the screen only draws that often – a bar jumping by 2 degrees looks smooth to us).