I’ve been watching the New Input System work with anticipation for a while, and decided to evaluate it for our use. I’m especially excited by the ‘Framerate Independent Input’ portion of the system, but I’m having trouble getting it to work as expected.
So, I write physics and timestep-dependent code in FixedUpdate, and tend to run FixedUpdate at 100Hz for physics solver accuracy. The problem I’ve had with Unity’s input in the past has been that when the framerate drops (maybe there’s a spike, or the art hasn’t been optimised yet) the game feels very laggy in terms of control response, even though we’re still running FixedUpdate at 100Hz. This is because the gamepad input is only updated on visual frames, for Update() rather than for FixedUpdate(). So, I grabbed Unity 2018.2.1f1 and downloaded the New Input System package.
In my tests, I use vSyncCount = 0 and targetFrameRate = 10 to simulate low framerate (10 FPS) and I implemented the ‘InputSystem.onEvent += eventPtr’ version, which should give me the most low level access to the inputs.
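For reference, my test setup looks roughly like this (a sketch; the namespace and the exact `onEvent` delegate signature may differ between package versions):

```csharp
using UnityEngine;
using UnityEngine.Experimental.Input;          // experimental package namespace at time of writing
using UnityEngine.Experimental.Input.LowLevel;

public class InputLatencyTest : MonoBehaviour
{
    void Start()
    {
        // Simulate a low framerate: no vsync, cap at 10 FPS.
        QualitySettings.vSyncCount = 0;
        Application.targetFrameRate = 10;

        // Low-level access: observe every raw input event the system sees.
        InputSystem.onEvent += eventPtr =>
        {
            Debug.Log($"event time {eventPtr.time:F6} (realtime {Time.realtimeSinceStartup:F6})");
        };
    }
}
```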
What I’d initially expected was something like this:
Where I can just ask the InputSystem for the input at the beginning of each FixedUpdate and get a newly polled value. However, because of the FixedUpdate frame pacing, the FixedUpdates won’t be spaced out nicely over the 0.1-second frame; they’ll be bunched together. So even though Time.fixedTime increments by 0.01 each call, I can’t rely on realtimeSinceStartup in this case.
In theory, though, when I come to do my FixedUpdate simulation, I want the gamepad input state as it was at the point in the frame where that FixedUpdate would have been called if the calls were nicely spaced out. Effectively: convert fixedTime into realtimeSinceStartup, then query that.
So, I started caching off the Gamepad Input whenever I got an InputSystem.onEvent, and the corresponding Time.realtimeSinceStartup, then in the normal Update, cache off the realtime, and equivalent fixedTime. Then inside my fixed update I can do this:
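In code, the idea is roughly this (a sketch; field names and the exact caching points are illustrative):

```csharp
using UnityEngine;

public class FixedTimeRemapper : MonoBehaviour
{
    float m_RealtimeAtFrameStart;
    float m_FixedTimeAtFrameStart;

    void Update()
    {
        // Cache the matching realtime/fixedTime pair once per visual frame.
        m_RealtimeAtFrameStart = Time.realtimeSinceStartup;
        m_FixedTimeAtFrameStart = Time.fixedTime;
    }

    void FixedUpdate()
    {
        // Offset of this fixed step past the last visual frame, in fixed time,
        // mapped onto real time under the assumption of evenly paced steps.
        float estimatedRealtime =
            m_RealtimeAtFrameStart + (Time.fixedTime - m_FixedTimeAtFrameStart);
        // ...then look up the cached input sample closest to estimatedRealtime.
    }
}
```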
Which should tell me what realtime value I think the FixedUpdate ‘should’ have been called at if it were nicely spaced out.
However, when I tried to do this, the realtime stamps on my cache looked like this:
3.742432
3.742439
3.742445
3.842307
3.842318
3.842324
It would give a ‘burst’ of events very close together, and then pause for 0.1 seconds (one visual frame).
That led me to conclude that the frame pacing is actually something more like this:
So the FixedUpdates are squished down one end of the frame, the InputEvents come in a quick burst, and the input polling rate turns out to be 60Hz, rather than 100Hz.
However, it was at this point that I realised that each input event has an associated eventPtr.Time on it, so surely I could use that. Indeed, when I analysed the data, it comes out nicely paced, every 60th of a second (ideally it’d be 100Hz, but maybe this can be configured somewhere?). However, eventPtr.Time seems to be in a weird format that starts close to 80000 on my machine, and doesn’t seem to match with Time.realtimeSinceStartup or Time.fixedTime. This means even my caching and querying of old data doesn’t work, because I can’t work out when the end of the visual frame was in this new time format.
So, I see a few solutions to this:
Add an API to get the InputSystem’s current time (in the weird 80000 time format) at any point, so I can measure from there
Whenever the InputSystem polls the hardware, also store the time on that event according to the realtimeSinceStartup, so that gameplay code has something to compare it to.
Have an input caching system whereby you can ask the system ‘At realtimeSinceStartup 0.56 seconds, what was the input state?’ Or even ‘What would the input be when Time.fixedTime was 0.56, if FixedUpdate was paced evenly across realtime?’
The last one would be especially nice because it’d mean the system would support buffered inputs (was this button down in the last X seconds).
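Something like this hypothetical API, say (none of these names exist; purely a sketch of the shape I’d want):

```csharp
// Hypothetical API sketch -- none of these types or methods exist yet.
// Query the device state as it was at a given point on the realtime timeline:
GamepadState past = InputHistory.GetStateAtRealtime(0.56f);

// Buffered-input style query: was this button pressed within a window?
bool jumped = InputHistory.WasPressedInLast(Gamepad.current.buttonSouth, seconds: 0.2f);
```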
P.S. it would be very useful to be able to control the hardware poll rate, as I’d like for it to update as fast as possible to reduce latency (or at minimum equal the 100Hz fixed update).
P.P.S. In response to line 25 of InputEvent.cs: in my opinion it’s probably worth keeping that m_Time as a double, or you might lose precision for finely-polled data during a soak test (24 hours plus).
Please let me know if you can think of any workarounds or features that I’ve missed in the new InputSystem which might help.
Anyway, thanks for reading and keep up the great work!
So that’s a thing that needs a decision to be made. In the previous new input system, this was indeed how it worked. It looked at all the events, sorted them, and then time-sliced them across each individual fixed update slice.
Mainly for performance reasons, the current system doesn’t do that ATM. So what ends up happening is that all events get processed into state on the first fixed update and then the remaining fixed updates see no changes.
So yup, that’d be the expected outcome ATM.
To explain a bit more, the native part of the system doesn’t do anything smart with the events. It just buffers up events and whenever it performs an update, it just flushes out the entire event buffer. And since the fixed updates are the first updates we run in the player loop, that’s the update that will usually end up seeing the majority of events pushed at it.
What the managed side does in turn is to simply process the stream in parallel into separate fixed and dynamic buffers (actually, because of edit mode, the picture is a little more complicated in practice).
Adding time-slicing for fixed updates there is totally doable. The one thing I want to avoid is adding cost that drives up processing time for events in general even if you don’t care about time-slicing of fixed updates.
Ok, that’s great to hear. Means the thing is working.
60Hz is the default sampling frequency for async polls. The frequency is user-settable (see here) but ATM we’re lacking a public API for this. I’ll add this today.
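Assuming it lands as a simple static property, usage would presumably be a one-liner like (property name is my guess; check the final API):

```csharp
// Raise the background polling rate for asynchronously polled devices
// (e.g. XInput gamepads) to match a 100Hz fixed update.
InputSystem.pollingFrequency = 100.0f;
```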
We’re a bit at the mercy of OS thread scheduling there so in games that do heavy threading, could be we need to tweak this a bit. Maybe increase priority. We’ll have to see.
Yup, that’s indeed a problem that will have to be fixed. There’s a ticket here.
The timeline is in fact the same as realtimeSinceStartup. However, the Unity runtime will go and reset realtimeSinceStartup at various points. In the editor it happens over and over as you go in and out of play mode, and in the player it happens once during startup (that’s the 80000 offset you’re seeing). Unfortunately, that’s a problem for the input system as it’d mean events would suddenly go back in time. Especially in the editor, given the system works in both edit and play mode, it’s a problem for the system to have its timeline shift around repeatedly.
But it’s solvable. Just needs doing.
Internally, that API exists (see here). Will likely get exposed as part of solving the realtimeSinceStartup inconsistency.
////EDIT: The current implementation of IInputRuntime.currentTime in NativeInputRuntime is incorrect (simply returns Time.realtimeSinceStartup). Will get fixed.
While not directly related, one thing I’d really love to have is input history at the user’s command. There are many situations where asking questions involving history instead of just current state is useful. There are two thoughts on APIs ATM.
InputStateHistory, which would be set up manually, given a control (and devices themselves are controls) and history depth (e.g. “keep 2 seconds of history data”), and then it simply records state changes and keeps a history.
Being able to set buffering depth on individual devices or the whole system. Right now, the system double buffers. But there’s really no reason this number has to be fixed to 2. In this approach, you could simply set gamepads to have a buffering depth of, say, 100 and state updating would cycle through those 100 buffers.
Both approaches could actually be useful to pursue. InputStateHistory gives a little more control but custom buffering depths has the advantage of making history data for a device available everywhere.
Note that even with the pollingFrequency API exposed, it won’t apply to input across the board. On Windows, for example, XInput gamepads are the only thing we poll. The rest is picked up as events or, in the case of HID, is picked up at the speed of the source.
Thanks for the speedy and detailed response! I appreciate it
Cool. Supporting time-slicing would definitely be appreciated, and the more of these use cases that are supported by the InputSystem itself, the less implementation has to be done on the gameplay side (which is nice). Having time-slicing work with just a simple API switch would make it easy for users to get the benefits without having to do all that investigation I detailed above.
Fantastic! Thanks for adding this API - I grabbed the changes and it seems to be working as expected (after an editor reboot).
Good to know. Looks like this is the remaining piece of the puzzle before I can use the InputSystem as expected, so I have a couple questions:
Do you have any idea on a timescale for being able to fix this?
Are there any workarounds/local changes I can make in the meantime? (I only care about Gamepad input on Windows PCs at Runtime (Both Running in Editor and Builds))
Definitely, this would be incredibly useful to be able to say ‘was this button pressed in the last 0.2 seconds’. I wrote something similar recently which would also allow for querying of game state, so you could say ‘was the jump button pressed in the last 0.2 seconds, and were we OnGround in the last 0.2 seconds’ to allow forgiving jumps off of edges.
It should also solve the problem of polling faster than fixed update, and having to look back at all unserviced events to make sure you catch button presses.
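A gameplay-side sketch of that kind of buffered query (all names are mine; this is roughly what I built):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Records timestamps at which a flag (e.g. "jump pressed", "on ground")
// was true, so gameplay can ask "was this true within the last N seconds?"
public class TimedFlagHistory
{
    readonly Queue<float> m_TrueTimestamps = new Queue<float>();

    public void Record(bool value)
    {
        if (value)
            m_TrueTimestamps.Enqueue(Time.realtimeSinceStartup);
    }

    public bool WasTrueInLast(float seconds)
    {
        // Drop samples older than the window, then check if any remain.
        float cutoff = Time.realtimeSinceStartup - seconds;
        while (m_TrueTimestamps.Count > 0 && m_TrueTimestamps.Peek() < cutoff)
            m_TrueTimestamps.Dequeue();
        return m_TrueTimestamps.Count > 0;
    }
}

// Usage -- forgiving ledge jumps:
//   if (jumpHistory.WasTrueInLast(0.2f) && groundedHistory.WasTrueInLast(0.2f))
//       DoJump();
```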
Please keep me updated on these points, as we’re doing a code restructuring at the moment, and I’d love to include the new InputSystem as part of it, as long as we can get the Framerate Independent Input working (otherwise I’ll probably hack together a plugin myself).
@Rene-Damm It would be helpful to get some clarification on some of what you’ve described. See below for specifics. Note that my understanding of the order of events within a “processing frame” are outlined in the Execution Order of Events manual page.
My understanding here is that the Native-side queues up events over the course of a frame. That queue is then processed by the Managed-side once-per-frame. Currently, this is during the first call to [FixedUpdate](https://docs.unity3d.com/ScriptReference/MonoBehaviour.FixedUpdate.html). Is that the case?
What happens when there is no FixedUpdate call during the course of a frame? (As I understand it, this can happen when FixedUpdate runs at a slower frequency than the main frame loop… e.g. with VSync disabled or large Fixed Timesteps.) Is there a fallback to Update?
Also, are you doing something to ensure that this step occurs before any user scripts (see: execution order)?
What are “fixed and dynamic buffers”? What’s the difference? What does this imply for end users?
Is it safe to assume that timestamps will show the time at which the polling actually occurred, rather than when it was expected? Which clock is referenced to generate these timestamps?
This link is broken. Appears to contain the quoted text that appears above it in your post.
The contents of each buffer is the “full state of a control” (i.e. snapshot), correct? Is a new buffer generated per input change or per polling period or…?
What clocks are used to generate the timestamps for non-polled events?
Overall, I think I’d prefer not to have a separate switch but have it just be how fixed updates work. The system has an update mask (InputSystem.updateMask) that allows you to turn off entire updates which will also release all state allocated for those updates. So the ideal picture is for a game to decide whether to process input in fixed or dynamic update and then turn off unwanted updates.
I expect a change to address the timeline issue to land in a 2018.3 beta within about a month.
In the editor, you can query EditorApplication.timeSinceStartup. This should give you the current time on the exact same timeline as InputEvent.time. Unfortunately, this won’t help in the player.
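In other words, in the editor the age of an event can be computed directly (editor-only, hence the guard):

```csharp
#if UNITY_EDITOR
// InputEvent.time and EditorApplication.timeSinceStartup share a timeline
// in the editor, so the age of an event is a simple subtraction:
double eventAge = UnityEditor.EditorApplication.timeSinceStartup - eventPtr.time;
#endif
```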
Interesting use case and point. I’m more and more convinced that giving control over the buffering depth of each device opens up quite a lot of interesting avenues. I’ll have a look at this soon.
InputStateHistory is pretty straightforward to build with functionality that’s already in the system. That and the fact that it’s probably going to have less of an overall impact on usefulness make me think this can probably be pushed to be finished at a later point.
I’ll push the priority of these things up and will let you know when there’s something new.
Pretty much. To be 100% precise, input that isn’t polled but rather received as events from the platform is not necessarily queued up over the frame. Depends a bit on how it’s handled on the platform. For many platforms, those events will get flushed out into the input event queue once per loop before the first FixedUpdate.
Then the events will simply surface in the (dynamic) Update. You can also force this manually by just turning off fixed updates entirely through InputSystem.updateMask.
For either FixedUpdate or Update, input processing happens before executing any MonoBehaviour callbacks. It’s not dependent on script execution order. It’s possible to move it around but that requires using the scriptable player loop API.
For each device, the system keeps three separate sets of state: fixed update, dynamic update, and editor state (only in the editor). Basically, each of these states represents a specific view in time on the input device in question.
When processing events, the system processes into fixed and dynamic update buffers in parallel. This is to prevent having to send the same events both in fixed and dynamic updates.
To the user, all this is intended to be transparent. If you query devices in FixedUpdate, they should look correct. If you query devices in Update, they should look correct. The issue that @MatthewHarris111 raised does mess with that picture somewhat WRT FixedUpdate.
It’d be possible to optimize and combine state storage but that’d further complicate things. The intention here is for the user to eventually decide where to process input (fixed update or dynamic update) and then turn off the unneeded type which in turn also releases all memory associated with the update.
In general, yes, that’s safe to assume. It uses the same clock that realTimeSinceStartup uses. Where that goes is somewhat platform-dependent. On Windows, it goes to QueryPerformanceCounter (i.e. high-res CPU timestamps).
The buffers contain memory snapshots of entire devices. Internally, what happens is that when the system has its set of devices, it figures out how much memory it needs to store the state of all devices together (taking into account double-buffering but also the different update types) and then it allocates one chunk of unmanaged memory on the C++ heap and that’s where all the device state is stored.
Then events come in. These are simply either full or partial memory snapshots of individual devices. All the system does during event processing is basically to take these and copy them into that single internal chunk of memory. There’s a bunch of stuff around this (like the state change monitors that watch whether a particular memory region received a change) but the core of it is real simple.
So, dynamically, there’s no new buffers being generated. For each device, the system simply flips back and forth between a front and a back buffer (both of which are simply memory slices of that big internal chunk of memory). This is the thing that I want to open up such that instead of a fixed double buffer scheme, you are in control over how deep that buffering goes.
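Conceptually, generalizing that double buffer to a user-set depth is just a ring buffer over slices of the one memory chunk. A managed-code sketch of the idea (the real implementation lives in unmanaged memory, so this is illustrative only):

```csharp
// Conceptual sketch of an N-deep device state buffer.
public class DeviceStateRing
{
    readonly byte[][] m_Buffers;   // N slices of raw device state
    int m_Current;

    public DeviceStateRing(int depth, int stateSizeInBytes)
    {
        m_Buffers = new byte[depth][];
        for (int i = 0; i < depth; ++i)
            m_Buffers[i] = new byte[stateSizeInBytes];
    }

    // On each update, advance to the next slice; the old front buffer
    // becomes history instead of being overwritten.
    public byte[] FlipToNext()
    {
        m_Current = (m_Current + 1) % m_Buffers.Length;
        return m_Buffers[m_Current];
    }

    // Up to (depth - 1) updates of history remain addressable.
    public byte[] GetStateStepsBack(int steps)
    {
        int n = m_Buffers.Length;
        return m_Buffers[((m_Current - steps) % n + n) % n];
    }
}
```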
That largely depends on the platform. Wherever we get timestamps on events, we try to use those and convert them to the event timeline (i.e. real time since startup). How good those timestamps are varies. Windows message timestamps, for example, unfortunately have pretty poor resolution.
Wait, is the distinction here that some platforms actually have two “queues” that get shuffled together prior to the first call to FixedUpdate (or Update if FixedUpdate is disabled)?
That would make sense - I assume you’d be “caching” them native-side to do a single bulk-copy rather than one-at-a-time to save time crossing the native/managed boundary.
I would like to request that the eventual documentation include whether a given input for a given platform is polled or event-driven. This kind of clarity will help provide people with sensitive use-cases a better understanding of what to expect, especially when they start playing with the polling frequency. (E.g. “I changed polling frequency but nothing changed for my input! Halp!!1!”)
I… okay, that is awesome. I love that this is how it’s being implemented. I also wasn’t aware that the PlayerLoop APIs had actually made it to the Experimental stage. That is awesome.
(For the record, Koreographer provides an audio-based event system. We’ve had to resort to custom Script Execution Order settings for years to ensure consistent “early” updates. Exciting to see that we’ll be able to provide first-class support for having an “Audio Event Update” pass soon!)
Can you explain what the problem is with sending “the same events” to both Fixed and Dynamic update calls?
The only thing I can work out based on this is that if my InputSystem.updateMask specifies both Fixed and Dynamic, then each gets a copy of the event which it can independently manage/consume. I take it this path was chosen rather than designing a shared buffer that had separately managed buffer views, then?
Ahh, then if my previous interpretation is correct, this would be the main reason to not go the buffer-view route: simplify the code with the assumption that the vast majority of users will choose to process input in either one or the other of the update types.
Out of curiosity, if that is indeed the assumption, is there data to back it up? I suspect a very common approach for some physics-based game types would be to handle gameplay-specific input in FixedUpdate and then UI input handling in vanilla Update…
Is it possible to document these on a per-platform basis (once the documentation pass begins)? This type of information helps people set expectations when dealing with certain game types.
Got it. Thanks very much for the run-down!
Documenting (when it’s time!) what timing source is used for which input types, as well as what resolution to expect will really help people understand what to expect when working with the system.
…
As a bit of background, I’m the lead developer of Koreographer. We frequently get questions about whether Koreographer can help solve the “frame-independent input” problem that Unity has had, well, forever. I’ve had to suggest to users that a custom solution is typically required if they need specific timestamps for their input, but to “watch out for the new input system that Unity’s working on!” My hope is that we’ll be able to update our Rhythm Game Demo to make use of this new system once it’s ready for prime time. That said, I’ve asked after documentation because many users will specifically ask for certain input types (e.g. “Android/iOS touch screen input timing” — perhaps unsurprisingly the most common request). Getting a better understanding of how the system works and the fidelity that can be expected will help people spend less time building and testing theories, and provide them with more time to build and test actual gameplay.
Yup, though not for this reason here. There’s a “foreground” and a “background” queue. The foreground queue is a simple byte buffer exclusively owned by the main thread. The background queue is a more complicated lockless thing meant to allow the main thread to flush the queue out without ever getting blocked by a producer thread.
Input that gets picked up as events is usually written directly into the main thread queue as on most platforms our UI processing happens on the main thread. The background queue is flushed into the foreground queue every time we run an update.
State data actually never leaves unmanaged memory, so there’s no cost incurred when crossing the boundary. Both events and the device state kept by the C# system reside in unmanaged C++ heap memory. The only thing that the input system puts on the managed C# heap is the InputControl frontends that read out data from the C++ heap.
Agree. The docs will have to have some per-platform sections that detail what exactly is supported on each platform and how.
Heh yup, script execution order is a kludge at best.
It’d either require the native side to understand this or the managed side to make copies of the event buffers it gets. The first option isn’t desirable as the native side is meant to be super stupid. The less it knows about how things work on the C# side, the more flexibility that side has in changing things around. The second option isn’t desirable due to cost.
Also, there’s weird side effects. Right now, every single event that gets queued only appears on InputSystem.onEvent once. If we process events twice, we’d either be showing them twice or would have to work around it.
I’m probably making a mess of explaining this.
Maybe it’s understood most easily by looking at what happens with an actual event.
Say the background polling thread picks up a change on an Xbox controller. It creates a state event and queues it. It gets timestamped with the current time and gets a unique ID. The system sees it’s not on the main thread and puts the event on the background event queue (if that one is full, it’ll block the polling thread).
Then the main thread enters the player loop and when it reaches either a fixed update or dynamic update section, it flushes the background queue into the foreground queue (appending all background events to the end of it) and sends the entire thing as a single memory block off to managed code by invoking UnityEngineInternal.Input.NativeInputSystem.onUpdate.
The managed side (InputManager.OnUpdate) goes through each event in the buffer and processes it. This means first calling onEvent on it (if there is any callback installed), then seeing if it touches any memory protected by change monitors, and then finally copying the state memory contained in the event into both the device’s current fixed and dynamic update front buffers.
Essentially it means that the system may update a buffer for an update that hasn’t yet happened. For example, if we’re in a dynamic update, then the state we copy into the fixed update buffer is for the next upcoming fixed update.
The entire fixed vs dynamic (vs editor vs before-render) update logic is by far the most confusing aspect of the state system I think. I hope it’ll become simpler.
Yup, that.
IMO a setup where you process input in both fixed and dynamic update doesn’t necessarily make a whole lot of sense. I mean, you can do it and it’ll work but you’ll have to pay for it with extra processing and memory overhead. And I don’t think the system should optimize for it and become even more complex as a result.
In the past, you could only process input in dynamic updates. Now, you have the choice – including the choice of doing it in both if you’re okay with the cost (which, though, really isn’t terribly great; you pay maybe a couple hundred KB for the extra memory and then an additional memcpy per event and a few extra cycles here and there).
Could be. And I can’t really back up my assumption with data. The important thing IMO is that the system allows for that kind of setup. It’s not the optimal setup, but it’s a possible setup.
Yup. Documentation is something that is slowly turning into a pretty high-priority item.
Holy crap. So if you have multiple options set in your update mask the Input System will actually process multiple times within a frame? Specifically, the picture looks like this:
Versus this:
Where the assumption is that the ProcessingInput blocks are where the InputSystem does the following:
Shuffles events from input sources (polling and event-driven) into its own queue.
Calls onEvent to send a message to registered listeners.
These callbacks actually occur during the “ProcessInput” phase, rather than a “FixedUpdate” or “Update” phase (rather, directly before those phases).
“Squashes” down the event queue into a “current state” description and writes that into the “front buffer”, moving the previous “front buffer” to a “back buffer”.
Front/back buffers are effectively equivalent to “currentDeviceState” and “previousDeviceState”.
This assumes the following:
There is no way to process a List of raw events at any step. You only get the squashed “current” or “previous” device state.
We can create our own “List of raw events” by listening to the onEvent callbacks and managing our own List.
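For illustration, a minimal version of that self-managed event list (a sketch; the experimental package namespace and `eventPtr.time` are assumed, and here I only record timestamps — a real recorder would copy out whatever state it needs):

```csharp
using System.Collections.Generic;
using UnityEngine.Experimental.Input;

// Keep our own record of raw events so later code can ask
// "what happened between times x and y?" Trims to a 2-second window.
public class EventRecorder
{
    struct Sample
    {
        public double time;  // eventPtr.time, on the input system's timeline
        // ...plus whatever state you copy out of the event
    }

    readonly List<Sample> m_Samples = new List<Sample>();

    public void Attach()
    {
        InputSystem.onEvent += eventPtr =>
        {
            m_Samples.Add(new Sample { time = eventPtr.time });

            // Bound the history so memory doesn't grow at high frame rates.
            double cutoff = eventPtr.time - 2.0;
            m_Samples.RemoveAll(s => s.time < cutoff);
        };
    }
}
```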
What is the purpose of the “BeforeRender” and “Editor” InputUpdateType options? It doesn’t look like the full processing you described (or that I outlined above, if I got it right?) occurs, based on the InputUpdate class… One problem I could see with this is that your “front/back” buffers might update in a relatively confusing manner such that a user wouldn’t see anything.
If my understanding is correct here, then the input state could actually appear to change mid-frame: when both the fixed and dynamic masks are set, the state of input may be different in Update versus FixedUpdate. What is the benefit of allowing people to update at both steps?
Could you provide some of the expected use cases for this? I don’t currently understand why you’d be interested in taking this approach.
If the idea is that you pick one or the other and users will soon have the ability to adjust the PlayerLoop themselves, then why support this at all? Why not simplify the setup and allow the user to specify where the input update happens with the PlayerLoop controls?
I’m struggling to understand a situation wherein you would want to process a different controller state in FixedUpdate than in Update within the same frame. I do understand @MatthewHarris111 's desire to have those “events” get distributed throughout various FixedUpdate calls within the same frame, represented by actual timing. This would effectively be a “replay the events that occurred while the previous frame was being processed in such a way that the events hit at the time represented by the FixedUpdate’s fixedDelta slice”. In lieu of that, being able to maintain the event queue throughout the update phases and have the ability to request events “between times x and y” would resolve that issue (though with high frame rates you would, of course, need to provide a mechanism by which to control the overall size of the event history, rather than a “processed controller state history” to combat the frames wherein no FixedUpdate gets called but inputs do occur…).
Awesome. Can’t wait to give it a read-through once it’s ready!
Correct. The native side invariably pushes events out on each update guaranteeing that the script code running in the update has the freshest input state available.
Yup, correct.
There’s some changes I want to make regarding “previous” as in device-state-before-the-last-event versus “previous” as in device-state-at-end-of-last-update, but ATM that’s how it’s set up.
Yup, correct. ATM the list is kind of implicit in the call sequence of “onEvent”.
This, too, is something where I think there should be additional API to consume events in bulk (IIRC there’s a ////TODO already in the code). It’d be trivial to have a callback that gives you a new struct, let’s call it InputEventBuffer, which wraps around the internal raw memory pointer + event count. This API could easily allow you to set aside events, for example.
BeforeRender updates are a complication originating from XR devices. The problem they solve is that render cameras sync’d to tracking data have noticeable lag if the tracking data isn’t updated right before rendering. So, devices can flag themselves as needing before-render updates, which then enables these updates (meaning that without XR devices present, those updates won’t happen by default). The updates are somewhat special in that they consume events only for those devices and leave everything else untouched. Natively it’s implemented by simply not flushing the event buffer at all. Also, before-render updates have no state of their own. They are considered part of dynamic updates.
Editor updates happen for the sake of EditorWindows. The new input system can be used in edit mode, too, so there’s a separate type of update fired in the editor. These get their own state and the system dynamically decides whether to route input to the player or the editor (something we still need to tweak and add more control over; there’s plans for giving a lot more control over both input collection and input processing WRT focus).
The front/back buffers for each update (and for each device) are independent of each other. When going into an update, global state is updated to switch to the buffers corresponding to the upcoming update.
Yes, dynamic updates may have a fresher view on input than fixed updates.
The two primary aims are latency reduction and simplicity. Each update gets the freshest view available and each update works exactly the same way.
Could be there’s an argument to be made for going entirely frame-to-frame with incoming input and running only the first update (whichever that one is) with events. Not sure. Happy to hear opinions. So far, for me this has fallen in the area of “only really matters if you mix fixed and dynamic updates”.
BTW I’m thinking of disabling fixed updates by default. So to process input in FixedUpdate, you’d first have to set updateMask. The advantage is that this would allow adding the time-slicing to fixed updates without having everyone incur the cost but at the same time make it so that if you enable fixed updates, the behavior is as expected (i.e. time-sliced processing). What do you guys think?
It’s used by input actions, for example. Through its bindings, an action monitors controls for change. Being able to observe state changes right when they happen means they see every single state change without being dependent on update frequency.
There’s several other use cases coming up. InputStateHistory will use it to know when it needs to copy out and record state. And I have some ideas around how to do virtual devices that synthesize their input based on other devices by also leveraging state monitors.
In the previous new input system we took an alternate approach of showing each event to every potentially interested party. Which wasn’t good. The more such interested parties you have, the more cycles are wasted looking at information that is of no interest (not to mention the time of dispatching the event to all those parties).
The state monitors solve this elegantly by making notifications context-specific. The system performs an efficient memcmp on the event and the current state and does so only if it has established that indeed, there is someone interested in the change – and then delivers the notification specifically to the listener for that monitor.
////EDIT: To illustrate how cool this is, the system can determine that in an incoming event, it was indeed bit #7 at byte #3 that has changed value and do so based on a one-bit state change monitor. And it can do so without ever trying to read an actual value from an InputControl (which involves virtual methods and a bunch of stuff). And it can notify just the InputActionMapState (without knowing about that class) owning the action that’s interested in changes in that button.
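The core trick is cheap to illustrate in plain C# (an approximation of what happens natively; the real system works on raw unmanaged memory):

```csharp
// Approximation of a one-bit state change monitor: given old and new
// snapshots of a device's state memory, detect a change in a specific bit
// without ever reading a value through an InputControl.
static bool BitChanged(byte[] oldState, byte[] newState, int byteOffset, int bitNumber)
{
    byte mask = (byte)(1 << bitNumber);
    // XOR exposes the bits that differ; the mask isolates the monitored one.
    return ((oldState[byteOffset] ^ newState[byteOffset]) & mask) != 0;
}

// e.g. BitChanged(oldState, newState, byteOffset: 3, bitNumber: 7) fires only
// when that one button bit flips, and the notification can then be delivered
// specifically to the single listener registered for that monitor.
```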
It’s an interesting idea. Have to think about that one.
One reason for the current design has been to give expected results with zero configuration (something that I’m already partially reconsidering for some specific features). I.e. user puts input code wherever in MonoBehaviour and the thing just works without any additional setup. Especially something like requiring a reconfiguration of the player loop is something considered relatively advanced.
Heh, welcome to update hell.
So, what you describe is possible and something along those lines is what the previous new input system did. You can process things strictly in sequence and basically process everything into a single set of state. And if you get it just right, it’ll work. But it won’t be pretty.
The thing is that every update represents its own time slice with its own definition of “before” and own definition of “after”. If you collapse it all into a single set of state, then you have to make sure that whichever view you take on that state, it comes up with the right answers according to those definitions.
So my take on this is… state memory is relatively cheap (a PS4 controller, for example, is 32 bytes so we’re storing 64 bytes extra). And (so far) I see no big incentive to optimize for a case that to me seems like the inferior setup. So, let’s keep it simple and instead of trying to collapse everything into a single set of state, let’s just pretend that every update is its own little world with its own state.
It makes a number of things a lot simpler. Want to have edit mode input sleep and only player input be active? Easy; just wipe the edit mode state buffers and don’t update them. (In the old system we had a bool flag in the InputControl code that when you read a control value and the flag was on, it would return a default value.) Want to get rid of an entire type of update? Easy; just release its separate state entirely and tell native to not run that update anymore.
Got it. I think it may be helpful in conversations to be specific when talking about “buffers” as either an “event queue” or a “state queue” or some such. I definitely had these two things mixed up in my mind for a while!
That could be pretty handy. Any particular reason that this wouldn’t be a callable API instead of a callback?
Got it. That makes a lot of sense! Thanks!
Woah. So the picture is actually more similar to this, then?
So if you process input in both fixed and dynamic updates, then you get a completely distinct input history to deal with?
I get the latency thing, to a degree. But simplicity? It sounds as though it’s simpler to implement but that it will just as easily cause a lot of confusion… Perhaps with more documentation this might improve, but the fact that you are potentially dealing with different data in FixedUpdate vs Update [with certain setups] is non-obvious. That’s not how it works with today’s system, is it?
If you decide to ship with a system capable of this level of end-user complexity, then I would highly suggest setting the defaults to only update a single time per frame, and to do so at the top of the frame (as I understand it this would be the Fixed Update flag). The option to adjust which points the system updates should then be considered “Advanced Input System Configuration”.
What do you mean by “whichever that one is”? Is that in reference to the potential for updates to get reordered by the PlayerLoop?
I’m not sure it makes sense to me to update input state mid-frame. The most recent visual information that a player has is what was generated during the previous frame. Part of the wonder of timestamped information is that you have the ability to replay the input events that occurred while the previous frame was processing in received order and respond to them. This is potentially extremely important for smooth handling of controls in a physics-based game, right? If you are actually updating the input state and consuming them during the multiple calls to FixedUpdate(), then you may have something like this (assuming three FixedUpdate()s in a single frame):
FixedUpdate(time=0.51) - Process everything that occurred since the last FixedUpdate() of the previous frame, representing a realtime delta of ~20ms.
FixedUpdate(time=0.52) - Process everything since the previous FixedUpdate(), representing a realtime delta of 2ms.
FixedUpdate(time=0.53) - Process everything since the previous FixedUpdate(), representing a realtime delta of 1ms.
You end up with a really unbalanced set of “inputs-per-fixed-frame” ratios. It gets you nearly-realtime input some of the time (and, to some extent, future input, because the current frame isn’t even printed to the screen for the user to consume yet).
What makes the most sense to me and, I think, what @MatthewHarris111 is requesting here:
In theory though, when I come to do my FixedUpdate simulation, I want to get the Gamepad Input State that was at the time in the frame that FixedUpdate would be being called if they were nicely spaced out. Effectively, convert FixedTime into realtimeSinceStartup, and then query that.
is that the events generated during the previous frame (whose time we’re actually consuming this frame, both in Update and FixedUpdates) be evenly distributed as though they were being “played back” over the time recorded.
If we’re all on the same page here, then the correct time to process/prepare incoming input would be at either the very beginning or very end of a frame. Then all update types will have access to the events (and, depending on processing [see bottom section of this post], states) that occurred during the time that was generated during the previous frame.
@MatthewHarris111 , please correct me if you had a different conception of what you’d like to do here…
Is that how Unity works today? Is the input state that you check in FixedUpdate different from that in Update when a change occurs between frames? (I.e. is the core input update on the old system actually processed between the Physics and Game Logic phases?) Whichever you choose, I would expect it to work as closely to the old system as possible by default.
So it’s a way to, for example, register for updates when “the X button state changes”? If so, A) that’s awesome and B) it could use a better name. If “any change” is reported by “onEvent”, then perhaps a specific state change could be reported by something like “onMaskedEvent”. A name like that tells me that there’s something to be registered and that I can provide a mask to specify what I care about.
I see in the repo that the StateChangeMonitor system is somewhat different from the basic “input event” system, so perhaps my thoughts here are non-sensical. If that’s the case, then perhaps the name “StateChangeMonitor” could simply become “StateChangeListener”?
Certainly. The default PlayerLoop setting would theoretically be whatever best maps to the way that Unity works today, right?
I think the key words in your statement there are “whichever view you take on that state”, though perhaps not in the form you intended. I’m going to outline a system here that, after reading over this thread a bit more, may be close to, if not exactly what is currently implemented. Please feel free to point out the differences between what I outline below and what’s been implemented. It would certainly help me get a better understanding of what’s going on under the hood and hopefully provide better feedback!
So, every update represents its own time slice with its own definition of before and after. But that’s basically where things end, right? You only have two main “update” streams (the others are similar enough), and they both deal with the same source material (input event stream and state). What’s more, they really only need to track a single datapoint: a pointer to the last-consumed-event.
It is only in the processing of an Update call that we suddenly get the distinction of a time slice. Processing an update will look something like this (for all Update types):
Store last frame’s “nowTime” as “previousTime”.
Update “nowTime” to whatever we’re told by the system.
Subtract the two values to generate a “deltaTime”.
Neat. A time slice.
The only pieces we care about for event processing are the “last-consumed-event” I mentioned before and the “nowTime”. Assuming that we have a combined set of in-order events in a list (more on this later), processing events for any given Update’s time slice looks like this. (This assumes that you want to be able to register for update-type-specific callbacks and that a class could be registered to get the same event multiple times; I address this further below.)
Compare the last-consumed-event’s “next” event’s timestamp to the new nowTime.
If nowTime >= “next” event’s timestamp, process it. Iterate to next event.
Trigger any requisite callbacks.
Stop when there is no “next” event, of course.
Else, stop.
Once stopped, update state (or do this during the loop above).
Update last-consumed-event to point to the last event that was processed in the loop above.
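The loop above might be sketched like this (all types and names hypothetical, not actual InputSystem code):

```csharp
using System;
using System.Collections.Generic;

struct InputEvent { public double time; /* payload elided */ }

class UpdateView
{
    public int lastConsumed = -1;        // index of the last-consumed-event
    public Action<InputEvent> onEvent;   // update-type-specific callback

    // Walk the shared, time-ordered event list up to this update's nowTime,
    // firing callbacks / updating state for each event in the slice, then
    // leave lastConsumed pointing at the last event processed.
    public void Consume(List<InputEvent> events, double nowTime)
    {
        while (lastConsumed + 1 < events.Count &&
               events[lastConsumed + 1].time <= nowTime)
        {
            lastConsumed++;
            onEvent?.Invoke(events[lastConsumed]);
        }
    }
}
```

One instance per update type (Fixed, Dynamic, Editor, …), each fed its own nowTime, is all the per-view bookkeeping required.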
That’s it. You can do this for any Update loop, whether it be Fixed or Dynamic or Editor [or Custom?], provided that you have a proper timing source that can get you your update-specific “nowTime”.
Effectively, you don’t need to store any specific state about the core combined Input Event buffer along with it. Rather, you would define an “UpdateView” that contains a pointer to the “last-consumed-event” for that type of Update (e.g. you could have an Editor UpdateView, a Fixed UpdateView, and a Dynamic UpdateView). During each update phase (not input update), they simply iterate through the list, updating the ‘last-consumed-event’ to efficiently point at the shared stream of Input Events (or Input Event Stream, IES).
The only “special logic” you would need to handle is the following: at the beginning of the process that takes raw events from the polling and native event systems and shuffles them into the IES, you would have some manager look at each UpdateView and determine the oldest “last-consumed-event” from amongst the group. You then adjust the IES’s ring buffer’s tail (I’m assuming that the underlying memory model here could be a ring or circular buffer) to point to that oldest last-consumed-event.
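That tail-adjustment step could look something like this (hypothetical names; assumes each view exposes an integer index of its last-consumed event):

```csharp
using System;
using System.Collections.Generic;

static class EventStreamMaintenance
{
    public interface IUpdateView { int LastConsumedIndex { get; } }
    public interface IEventStream { void AdvanceTailTo(int index); }

    // Run before newly polled events are appended to the stream.
    public static void ReclaimConsumedEvents(
        IEnumerable<IUpdateView> updateViews, IEventStream eventStream)
    {
        // Find the oldest last-consumed-event across all views
        // (Editor, Fixed, Dynamic, ...)...
        var oldest = int.MaxValue;
        foreach (var view in updateViews)
            oldest = Math.Min(oldest, view.LastConsumedIndex);

        // ...and move the ring buffer's tail up to it, freeing everything
        // that every view is already done with.
        if (oldest != int.MaxValue)
            eventStream.AdvanceTailTo(oldest);
    }
}
```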
You will notice that using a shared Input Event Stream with a set of per-update-type views provides the following benefits:
Update-type-specific “onEvent” callbacks. (e.g. I can register one delegate to get called for events as processed in FixedUpdate and another to get the same calls for the dynamic Update.)
If this is undesirable and callbacks were all expected to happen once per frame, then running through these with a separate, internal “Event” UpdateView would be just as easy to implement. The logic I outlined above might look different, however. Whereas the standard set (Dynamic, Fixed, etc.) of UpdateViews would be reduced to “state updates”, this internal one would not adjust state and, instead, simply trigger the callbacks. Presumably this would happen directly after the main IES update where events from the previous frame (or period) are coalesced into a single list.
This might be desirable if, for instance, you wanted to handle input change individually within a FixedUpdate call but also ensure that it would match the state supported by that specific call.
Unified code for however-many-update types you want to support.
Any special state handling is moved away from the core input stream (IES) - state history could be managed by the UpdateView itself.
The system is “input update period” agnostic (and should be). You could run the Input Update process outlined above anytime in the PlayerLoop and it would only adjust the head/tail and contents of the IES. Event processing and state updates can be managed in an entirely “type”-specific manner, featuring 100% shared code.
The system completely handles situations like “FixedUpdate” not being called within the span of an “input update period” (typically a frame). The Fixed UpdateView’s state simply doesn’t update and its last-consumed-event most likely becomes the IES buffer’s tail.
Listeners could be tracked as part of UpdateView state.
Here are some illustrations to help show what this might look like:
This diagram shows events being generated in the Native Events and Polled Events system during frame 10. The ProcessInput process run at the head of every frame combines those into an ordered list and will add those to the head of the Input Event Stream’s buffer (not shown).
This diagram shows the processing in frame 11 of events generated during frame 10 by the “Fixed Update” UpdateView. The vertical bars along the “Fixed Update Processing” are equally spaced (based on the fixed time step). You will notice also that the timeline at the top represents the previous frame’s timeline. This is in recognition of the fact that every frame we “generate” time and then “consume” it in the next. This diagram (as well as the next) illustrates the consumption step.
The arrows emanating from the “FixedUpdate” blocks merely point to the vertical bars to represent that update’s “nowTime”. For each “FixedUpdate” block, the input state is updated by the events in the dashed boxes (those events that fall within its time period). If you so desired, you could have a Fixed-specific callback period for each FixedUpdate block, as outlined in text above (not shown in the diagram).
And here is the equivalent diagram for the “Dynamic Update”. Again, this is “consuming” the events along the time generated by the previous frame. The “state” during the call to Update will have been updated by the events in the dashed box. It would be trivial to do a “dynamic update-specific onEvent callback step”, if so desired.
[Note 1: In the above images I used the bottom-right edge of the relevant update block to point to the “nowTime” simply as it was “furthest right”.]
[Note 2: I left any intricacies of tracking independent devices out of the equation for simplicity. I assume the core concepts outlined above could be extended to work with whatever model was deemed best (e.g. events/states are ‘tagged’ with a device, or perhaps there would be independent Input Event Streams for each device, or…).]
I have no idea if this is remotely similar to how things are actually implemented or not. If not, then perhaps it could be helpful? Or perhaps you could point out why this might not work? Regardless, I hope it’s constructive for the conversation.
I think that’s a bad idea! You summed up why pretty nicely yourself:
This is the correct instinct.
Handling input in Update and then applying it in FixedUpdate is a bit cumbersome. It’s also very bug-prone, and somewhat hard to reason about. Being able to check “was Jump pressed between the last FixedUpdate and this FixedUpdate” and not have to care about when FixedUpdates and Updates happen in relation to each other is a lot simpler than the current state of things.
As an example, here is jumping if you can check input in FixedUpdate.
At least, I’m pretty sure that it’s the correct way to do it. I had to think really hard about what happens if two Updates happen in a row, or two FixedUpdates, and I still wouldn’t bet much money that those 11 lines of code are correct. I also really want to throw up a comment there for why I need to set jumpPressed to false in FixedUpdate, so I don’t forget and my teammates understand it. It’s not very obvious unless you just thought about Fixed/Dynamic.
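The code block referenced here hasn’t survived, but the caching pattern under discussion is presumably along these lines (a hedged reconstruction, not the original code; JumpPressed() and Jump() are stand-ins):

```csharp
using UnityEngine;

public class Jumper : MonoBehaviour
{
    bool jumpPressed;

    void Update()
    {
        // Latch the press rather than overwriting with the current button
        // state, or a press can be lost when two Updates run in a row.
        if (JumpPressed())
            jumpPressed = true;
    }

    void FixedUpdate()
    {
        if (jumpPressed)
        {
            Jump();
            // Reset here, not in Update: several FixedUpdates can run
            // back to back, and the press must be consumed exactly once.
            jumpPressed = false;
        }
    }

    bool JumpPressed() { return Input.GetButtonDown("Jump"); } // stand-in
    void Jump() { /* apply jump impulse */ }
}
```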
Also note that if you wrote the Update from the block above like this:
void Update() {
    jumpPressed = JumpPressed();
}
It’d be wrong, and the player would sometimes - but only sometimes - jump double/triple height. That’s very easy to miss when reading the code, and it’s very hard to debug why that is happening.
The whole Fixed/Dynamic thing is also really hard to get for Unity beginners, or non-technical teams. I see a lot of people struggling with this on the forums. In essence, I think that defaulting input to “just work” in FixedUpdate makes the engine significantly less complex.
I think the burden of configuration here should be on the people that are doing complex enough things that looking at the same input event both in Update and FixedUpdate becomes relevant.
Unless the cost of this is something absurd like a hundred MB extra memory usage or 1 ms of processor work each frame on midrange mobiles or whatnot. But that seems very unlikely!
@MatthewHarris111 There’s a changeset on a branch with new API for handling timestamps here. The idea is that input stays on its strictly linear time since Unity startup but that we make the time offset applied to Time.realtimeSinceStartup available through the API. Unfortunately, it requires a native API change which will first land in a 2018.3 beta. However, if this works out I see no reason not to backport it to 2018.2, too.
A callable API would require that we keep the data around. The callback approach allows the system to not retain any data but rather leave it to the user to copy things to separate buffers, if so desired.
Potentially, yes.
Yes, today’s system has no support for fixed updates.
Yeah, things are somewhat leaning towards making fixed updates an opt-in feature (and there have been repeated calls for killing it entirely). And how exactly they work is something that still seems to be solidifying. There isn’t much in terms of learnings from the old system that can be applied here.
There’s no guarantee there’s a fixed update occurring in a loop. If the loop runs fast enough and fixed update frequency is low enough, the loop may outpace fixed updates and skip fixed updates intermittently.
Without fixed updates in the picture, I’d argue this is exactly what you want. The freshest set of input available at the time you process input. I don’t see any advantage to sampling input early in the loop, only to then spend some time on various kinds of unrelated processing while your input events are aging in the pipe. (Unfortunately, most event-based input paths actually end up giving us exactly this picture; one reason polling is actually quite attractive)
But yes, with fixed updates in the picture, the question indeed becomes, how do you want those to work? And that seems to be an open question. And having events pop up in the middle there may indeed be undesirable. I think this is all part of figuring out what “input in fixed updates” really means.
There’s no support for fixed updates in UnityEngine.Input. You get one view which is updated early in the loop and stays fixed for the duration of the loop. If you query in FixedUpdate(), you query the same data in each fixed update in the frame.
Yup, it’s not coupled to events as such. “Listener” might be better than “monitor”. I’ve added a note.
I think the idea of keeping all state in the event stream and basically having different sets of before&after pointers is an interesting one.
The problem it leads to is that device state becomes equivalent to event state. We have devices where it’s very desirable to send partial updates only (touchscreen; basically anything with massive state) or even to customize the process of integrating state into the device (touchscreen again; it manages touch allocations as storing touch in a state-based ways has certain complications/implications).
That makes it desirable to divorce device state from event state (also leads to a number of other advantages). Which you can totally do in your model. Just adds one extra layer.
So when you add that, you’re pretty much where the current system is. There’s a ‘current’ and ‘previous’ pointer per device and per update type. So that’s pretty much what you get in your model if you copied out the end results from the event stream.
Wait wait, this isn’t about taking the functionality away. Only whether the functionality is enabled by default or not.
So, if the decision ends up being to disable it by default, all it’d mean is that you do
InputSystem.updateMask |= InputUpdateType.Fixed; // (could use a nicer API, though)
and you’re back to having FixedUpdate support.
But yeah, not sure yet whether disabling it by default is a good idea. I, too, definitely would prefer it to be enabled by default to avoid any surprises. We’ll probably just have to see how expensive the fixed update processing is turning out to be in the end.
I didn’t think it was about taking the functionality away! What I’m trying to say is that the default you chose will end up being what most projects use, even if other choices might be better for that project. The API looks completely fine*, and shouldn’t be hard to use, but when you have a default, it will be assumed that the default is correct for most cases.
It’s important to remember that both you (and people like @SonicBloomEric who develop very time-sensitive software) are experts in the field of update ordering, so most people will have a lot less experience to draw on to make the choice of what to do with regards to the input mask.
I think that having FixedUpdate support by default will lead to an easier to use engine for most people getting into the engine, and opting out won’t be a problem for the people with the experience and knowledge to need it. I might be wrong!
No matter what you choose to have as default, you should consider adding a warning if people are checking input in FixedUpdate when that’s not enabled. Then people will get to know that they’re doing something wrong, unlike the situation right now where reading KeyDown/Up in FixedUpdate is always wrong, but you’re not really informed about that without really digging through the docs.
That warning might be annoying for people who Know What They Are Doing, though?
Could you consider making the player loop interface that easy to use? In particular, to get a PlayerLoopSystem to run in the Fixed timestep, it looks like you have to copy the IntPtr from the built-in fixed update entry’s .loopConditionFunction to your own system. If I could just set the InputUpdateType, that would be much more comfortable, and a lot more readable.
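For reference, the hoop being described looks roughly like this with the 2018.x experimental API (a sketch under assumptions; MyFixedInputUpdate is a hypothetical marker type):

```csharp
using System.Collections.Generic;
using UnityEngine.Experimental.LowLevel;

public static class MyFixedInputUpdate   // hypothetical marker type
{
    public static void Install()
    {
        var root = PlayerLoop.GetDefaultPlayerLoop();
        var systems = new List<PlayerLoopSystem>(root.subSystemList);
        for (var i = 0; i < systems.Count; ++i)
        {
            if (systems[i].type != typeof(UnityEngine.Experimental.PlayerLoop.FixedUpdate))
                continue;

            var custom = new PlayerLoopSystem
            {
                type = typeof(MyFixedInputUpdate),
                updateDelegate = () => { /* fixed-rate input processing */ },
                // The awkward part: copy the native IntPtr so our system
                // is gated on the same fixed-timestep condition.
                loopConditionFunction = systems[i].loopConditionFunction,
            };
            systems.Insert(i, custom);
            break;
        }
        root.subSystemList = systems.ToArray();
        PlayerLoop.SetPlayerLoop(root);
    }
}
```

A simple `InputSystem.updateMask |= InputUpdateType.Fixed;` would indeed be far more readable than this.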
Okay, I understand you now. Where you say things like “Yes, today’s system has no support for fixed updates”, by “fixed updates” you specifically mean “a re-evaluation of the input state during the [fixed] stage.”
Previously this was somewhat ambiguous: “no support for fixed updates” sounds an awful lot like “You couldn’t check Input in FixedUpdate using the old system”, or that it would somehow have stale data, rather than the same data sent to the dynamic Update phase. I understand now.
Let’s make a distinction here between Input Events and Input State. Some games only look at the most recent possible state when making their gameplay determination. In those cases, you could argue that it would be desirable to provide the latest and greatest Input State to those users and a polling system would be a phenomenal boon.
However, there are also games that are more time-sensitive than that, where frame-precision and timing is extremely important. These are the games that need the Input Events to have timestamps so that they can make important order-of-events and timing-based decisions. For these games, I would argue that you absolutely don’t want a mid-frame input state re-evaluation. Here’s why:
Frame-10 took 27ms to process. During that time, there were three separate input events.
Frame-11 begins with a deltaTime of 27ms, the amount of time between the start of Frame-10 and the start of Frame-11. In Frame 11, our goal is to represent what happened in those 27ms.
There are two options to consider for when to run the “Input State Re-Evaluation” process and generate the Input Events and Input State for game logic to consume. Specifically:
Update at the beginning of Frame-11. We can use Input Event timestamps to determine where in the 27ms the three events occurred and render/score things consistently with sub-frame precision. The events we’re processing were generated over the period of time that we’re now processing this frame.
Update at the beginning of dynamic Update phase of Frame-11. Frame-11’s Update begins, say, 12ms after Frame-10 ended. The effective deltaTime between subsequent calls to Update (the deltaTime for the Input) won’t necessarily match the deltaTime of the overall frame. Part of the Input deltaTime represents time consumed by the processing of the previous frame, while the other part represents time from the future (in relation to the actual visual frame we’re building in this Frame-11).
The point I’m making above about differing deltaTimes is actually compounded when you deal with FixedUpdate because it can occur 0-to-n times per frame. The delta will be equivalent at each step, but the fact that you would still want sub-update precision on your events won’t change at all. If the events generated by the Input System deal with more or less time than what is driving the FixedUpdate step, then your simulation will suffer. To get the most-precise-physics simulation, you would want your control inputs to be distributed consistently across the simulation timeline.
What’s even better is that if you stick to this approach, then it becomes far, far more straightforward to provide each iteration of FixedUpdate with state specific to its slice of time, as well as (if possible) access to the list of events that were triggered by players during its slice of time.
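A hypothetical sketch of that per-slice handoff: each FixedUpdate maps its fixed step onto the realtime axis and consumes only the events stamped inside its slice (TimedEvent, FixedToRealtime, and ApplyToSimulation are all assumed names, not InputSystem API):

```csharp
using System.Collections.Generic;
using UnityEngine;

public class SlicedInputConsumer : MonoBehaviour
{
    struct TimedEvent { public double time; /* payload elided */ }

    // Filled from the input system's event callback during the frame.
    readonly List<TimedEvent> cachedEvents = new List<TimedEvent>();

    void FixedUpdate()
    {
        double sliceStart = FixedToRealtime(Time.fixedTime);
        double sliceEnd   = FixedToRealtime(Time.fixedTime + Time.fixedDeltaTime);

        // Consume only the events that fall inside this step's slice.
        foreach (var e in cachedEvents)
            if (e.time >= sliceStart && e.time < sliceEnd)
                ApplyToSimulation(e);
    }

    // Placeholder: a real version would offset fixedTime onto the
    // realtime axis, as discussed elsewhere in the thread.
    double FixedToRealtime(float fixedTime) { return fixedTime; }

    void ApplyToSimulation(TimedEvent e) { /* drive physics from input */ }
}
```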
Does this make sense?
Gotcha. I’d considered that to an extent. You mentioned at some point that you could represent the entire state of a controller in a tiny amount of space so why not just have a state-stream? Events, then, are just the differences between two subsequent states in the stream! In the end I left this out of the writeup because I figured there’s a whole lot of detail that you guys are handling for us behind the scenes (i.e. keyboard and touchscreen state is likely more complicated than a basic game controller or mouse) so mentioning it would do little more than detract from my main point. Overall, I totally get that there are complications/implications. Hopefully the discussion proves helpful in some way, regardless!
I think @Baste was under the same misconception that I was. When you say “[add this script] and you’re back to having FixedUpdate support”, the simplest way to interpret that for a current Unity user would be that you’re actually controlling access to Input System state during FixedUpdate with that flag. This interpretation is what would lead someone to suggest that you’d have to cache the state in Update to be read back in FixedUpdate as @Baste did with their pseudocode.
Perhaps it would be good to not refer to this as “FixedUpdate support” but rather some version/iteration of “Support for Input System Refresh in [insert-phase-here]”? That, or you could call it “Physics Phase” and “Game Phase” (which is similar to how the Execution Order of Events manual page refers to them).
Suggestions
Overall, I would suggest the following:
Update the Input State/Event queues once per frame, by default.
This should be at the beginning of the frame.
This should be configurable insofar as someone can adjust it by customizing the [PlayerLoop](https://docs.unity3d.com/2018.1/Documentation/ScriptReference/Experimental.LowLevel.PlayerLoop.html).
Provide an API for users to call manually to trigger an Update of the Input State/Event queues at a time of their choosing.
Name it something obvious like InputSystem.ForceUpdate().
Document that it should be called as few times as possible.
Document how the API can be used with the PlayerLoop to trigger multiple updates, if so desired. (From the documentation for [PlayerLoop.SetPlayerLoop](https://docs.unity3d.com/2018.1/Documentation/ScriptReference/Experimental.LowLevel.PlayerLoop.SetPlayerLoop.html): “You can insert custom script entry points in the update order before setting it. For example, this allows you to add a script which runs right before physics or in other places where scripts are not run by default.”)
With the above setup, you will have a simple, consistent framework with the following benefits:
Works like Unity today, by default.
Enables users to customize Input System refresh timing to suit their specific needs.
The new Input System will be ready to work with the PlayerLoop system from the get-go.
Allows you to focus on optimizing a simpler system.
You mentioned this in the touch screen thread, and it got me thinking.
Isn’t this behaviour a bit insane? The editor behaviour is fine - it’s what I’d expect. The build behaviour seems bonkers, though. Time.realtimeSinceStartup shouldn’t reset at arbitrary points; that goes against the entire intention of the property! Is it a hack to get around some other issue? Or is there some idea that this is helpful?
Any chance that you’ll reconsider this behaviour internally? It would prevent you from having to make what you’re making now, which is a very confusing workaround to convert between Input time and realtimeSinceStartup. It’d also make realtimeSinceStartup better.
Also note that this isn’t documented anywhere. That’s pretty bad! If this behaviour stays in, the editor/build behaviour needs to be written down in Time.realtimeSinceStartup’s docs, no?
My main worry is creating defaults that are costly to have in terms of performance and costly to get rid of in a project once you’ve locked yourself into them. IMO Unity has a bit of a history presenting solutions as defaults to the user and then later teaching “best practices” that advise against the defaults.
Either way, I think once it’s clear how fixed update in the input system works exactly, this too will become clearer. And I agree, if it turns out that it’s not on by default, the system has to be telling you if you try to use it in fixed updates.
TBH not sure why it does that. In effect, it removes the engine startup cost from Time.realtimeSinceStartup but how that would be essential, I’m not sure. There’s quite a few things in Unity where today it’s not easy to reconstruct why they work the way they do. At the same time, changing them is often tricky and can inflict a great deal of pain on users.
We’ve opted to make the conversion implicit and expose time automatically adjusted for Time.realtimeSinceStartup. This hopefully gives the generally expected behavior. In the editor, in code that runs in edit mode, you may see timestamps jumping around occasionally (they’ll all stay consistent relative to each other but will offset as a whole) but in game code, you’ll probably never notice anything. The reset on realtimeSinceStartup happens before we run the first update and thus before we deliver the first input events.
Ah yup, that’s ambiguous. I’ll make sure we word that more precisely in the docs.
With regards to dynamic updates, IMO no. With no fixed updates in the picture, I don’t see how this will make a practical, positive difference. I do see, however, how it would make a practical, negative difference by introducing extra lag.
And with flushing input right before the dynamic update, you do time-slice things correctly between dynamic updates. I.e. you process all events that happen from the last dynamic update to the next dynamic update. It’s just that you put the frame markers for input as close as possible to the dynamic update instead of all the way at the beginning of the frame.
BUT… throwing fixed updates in there, I think I see your point and agree that it may be quite important to get a consistent view between fixed and dynamic updates with no new input being picked up in-between and with input frame markers being consistent between the two updates. I’d still go and fetch all input right up to the first fixed update – which is kind of how it works in Unity today; the input manager is updated in the early update phase which runs right before fixed updates but I counted 26 update functions that we run in the frame before we get to that point.
Does this actually happen at any time other than when you enter “Play In Editor”?
If you need a Time.realtimeSinceStartup that’s stable for Editor, why not reference [EditorApplication.timeSinceStartup](https://docs.unity3d.com/ScriptReference/EditorApplication-timeSinceStartup.html)?
As you have access to the source code, can you profile the difference between “beginning of frame” and “just before FixedUpdate”? I have no idea how to quantify the impact that this extra lag may have (and no way to test it personally). The most insight that we users have into how the core engine loop works today is outlined in the Execution Order documentation, which doesn’t even show where the Input system updates…
Sure, but “time between calls to Update ≠ time between frame renders (i.e. Time.deltaTime)”, right? With Unity today we have no way of knowing when the buttons were actually hit within a frame. Once we do get that information (timestamps) it may become important to process input more consistently with respect to time (as it definitely will with, say, rhythm games, and possibly also with certain physics simulations).
I guess this could be managed manually if we had access to a Time field that pinned the current frame’s Time along the realtime axis. As I understand it, Time.time is affected by both Time.timeScale and Time.maximumDeltaTime. From the looks of it, Time.unscaledTime might do the trick, but if memory serves, it is also affected by the Time.maximumDeltaTime setting. If that’s the case, then perhaps a new Time.realTime property (or some such) would enable us to calculate these offsets ourselves? I.e. even if you do your update just before the [Physics or Game] update, then we could compare the current frame’s “realTime” to the event timestamp to determine whether we should handle it this “Update” or next…?
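The deferral idea could be sketched something like this. Time.realtimeSinceStartup stands in here for the hypothetical Time.realTime property being proposed; the event payload and queueing are purely illustrative.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: pin the current frame's position on the realtime axis, handle
// only the queued events whose timestamps fall at or before that point,
// and hold the rest for the next Update.
public class DeferredEventQueue : MonoBehaviour
{
    struct PendingEvent { public double timestamp; /* ...payload... */ }

    readonly Queue<PendingEvent> pending = new Queue<PendingEvent>();

    void Update()
    {
        // Stand-in for the proposed Time.realTime.
        double frameRealTime = Time.realtimeSinceStartup;

        while (pending.Count > 0 && pending.Peek().timestamp <= frameRealTime)
        {
            var evt = pending.Dequeue();
            // ...handle the event in this Update...
        }
        // Anything left in 'pending' is newer than this frame's realtime
        // position and gets processed next Update instead.
    }
}
```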
Also, what of the suggestions I posted above? If you’re intent upon setting up the default update to be before the “Physics Phase” or the “Game Phase”, whichever-comes-first, then you could still accomplish that with the PlayerLoop-based approach, no? Just set it to run before “Physics” in the default loop and it’ll either update right before Physics, if there’s updates to process, or right before Game otherwise.
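The PlayerLoop-based approach could look roughly like this, assuming the 2018.x experimental namespaces (UnityEngine.Experimental.LowLevel / .PlayerLoop, which moved out of Experimental in later versions). The flush delegate body is a placeholder:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Experimental.LowLevel;   // UnityEngine.LowLevel in later versions
using UnityEngine.Experimental.PlayerLoop; // for the FixedUpdate phase marker type

// Sketch: insert a custom input-flush step immediately before the
// FixedUpdate phase of the default player loop.
public static class InputBeforePhysics
{
    [RuntimeInitializeOnLoadMethod]
    static void Install()
    {
        var loop = PlayerLoop.GetDefaultPlayerLoop();
        var systems = new List<PlayerLoopSystem>(loop.subSystemList);

        var flush = new PlayerLoopSystem
        {
            type = typeof(InputBeforePhysics),
            updateDelegate = () => { /* flush/update input here */ }
        };

        // Find the FixedUpdate phase and slot our step in front of it.
        int index = systems.FindIndex(s => s.type == typeof(FixedUpdate));
        if (index >= 0)
            systems.Insert(index, flush);

        loop.subSystemList = systems.ToArray();
        PlayerLoop.SetPlayerLoop(loop);
    }
}
```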
Sorry for the very long delay here. Having my head in the action stuff ATM (boy that feature still needs a lot of work) but a pass on the whole update handling is definitely still upcoming.
@MatthewHarris111 As of Unity 2018.2.5 and latest develop branch, timestamps should now work the way you initially expected (i.e. be on the same timeline as Time.realtimeSinceStartup).
Nope.
That’d give you different behavior when you test your game code in the editor as opposed to when you run it in the player. So you’d have to #if UNITY_EDITOR your game code and go to Time.realtimeSinceStartup in one case and EditorApplication.timeSinceStartup in the other. IMO that’s worse than the current solution where time offsetting is only visible to editor code.
That’s what @MatthewHarris111 was trying to do but ran into the problem with timestamps not being relatable to Time.realtimeSinceStartup. Should work fine now, though.
Hopefully, once things have settled around updates, this won’t be necessary.
The second part is already in there. I was considering renaming it from InputSystem.Update to InputSystem.ForceUpdate based on your suggestion, but I’m not fully decided yet. One thing I want to add to that is an explicit InputUpdateType.Manual where the system will never automatically update and will leave it to the user to control input updates. It can sort of be done ATM by simply clearing the update mask, but I’d like to have that as an explicit feature.
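The “clearing the update mask” workaround might look like this with the preview-era API. This is a sketch under the assumption that InputSystem.updateMask and InputSystem.Update(InputUpdateType) behave as in the 2018.x preview; these names changed in later package versions, and the explicit Manual mode described above would supersede it.

```csharp
using UnityEngine.Experimental.Input;          // preview-era namespaces
using UnityEngine.Experimental.Input.LowLevel;

// Sketch: disable automatic input updates and pump the system manually.
public static class ManualInputUpdates
{
    public static void EnableManualMode()
    {
        // Stop the system from running any automatic updates...
        InputSystem.updateMask = InputUpdateType.None;
    }

    public static void Pump()
    {
        // ...and flush queued events on demand instead.
        InputSystem.Update(InputUpdateType.Dynamic);
    }
}
```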
The first part is something that’s on the list to look at as part of revisiting update handling. TBH having trouble making up my mind before having a closer look and trying out stuff.
Multiple updates will remain in some form as we have before-render and editor updates as well, but the handling of fixed and dynamic as well as the precise relationship of the two WRT input is something that definitely needs looking at.