As the title suggests, I'm trying to reproduce the legacy Input.GetAxisRaw("Mouse X") and "Mouse Y" behaviour. According to the documentation, the old mouse axes were calculated by taking the mouse delta and multiplying it by a sensitivity value (which I can see in the old Input Manager).
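For context, here's roughly what my reproduction looks like (a minimal sketch; the class and field names are just for illustration, and the 0.1 default matches a fresh Input Manager's mouse axes):

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class MouseLook : MonoBehaviour
{
    // Sensitivity copied from the old Input Manager's "Mouse X"/"Mouse Y"
    // axes (0.1 by default in a fresh project).
    [SerializeField] private float sensitivity = 0.1f;

    private void Update()
    {
        if (Mouse.current == null)
            return;

        // New Input System: per-frame pointer delta.
        Vector2 delta = Mouse.current.delta.ReadValue();

        // My attempted reproduction of the legacy axes:
        float mouseX = delta.x * sensitivity;
        float mouseY = delta.y * sensitivity;

        // ... apply mouseX/mouseY to the camera here ...
    }
}
```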
When I try this with the new Input System, it generally works and gives me an equivalent of the old Mouse X and Mouse Y axes, but it seems significantly less precise. To double-check, I logged the actual values it produces against the old method, and they are noticeably different even when I use the same sensitivity value from the old Input Manager.
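This is roughly how I compared the two (a quick sketch; it assumes Active Input Handling is set to "Both" in Player Settings so the legacy Input class is still available alongside the new system):

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class MouseAxisComparison : MonoBehaviour
{
    [SerializeField] private float sensitivity = 0.1f;

    private void Update()
    {
        if (Mouse.current == null)
            return;

        // Legacy axis (only works while Active Input Handling is "Both").
        float oldX = Input.GetAxisRaw("Mouse X");

        // My reconstruction from the new Input System.
        float newX = Mouse.current.delta.ReadValue().x * sensitivity;

        Debug.Log($"legacy: {oldX:F4}  reconstructed: {newX:F4}");
    }
}
```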
The net result is that mouse input, in this case for an FPS, feels noticeably worse than with the old Input System. Because the values are coarser, the smallest possible mouse movement turns the camera further than it used to. Sure, you can lower the mouse sensitivity to compensate and shrink those steps, but at the same effective sensitivity (say, tuned so a 180-degree turn takes 3 inches of horizontal mouse movement) the new system is still clearly less precise.
Any idea what I’m doing wrong or why this would be?