I noticed in the sample project there was a separate panel for each unique UI window. In my particular case, I’m going to have a lot of different windows stacked on each other, which I won’t know until runtime. Originally I was thinking of having a single panel root and adding the “new” windows as children of that root. This would avoid having to create a new GameObject in the scene for each UI window.
I believe by not having a single UXML per panel, I’ll lose out on LiveUpdate, but is there anything else to be concerned about? (maybe in updates or rendering?)
I’m hoping to update the sample to show how to do that screen with a single UIDocument/GameObject. There are legitimate reasons to have multiple UIDocuments in the scene, like when you have floating health bars following many units on the map, but menu screens are not such an example. The goal of UI Toolkit was in fact to reduce the reliance on GameObjects and the Scene for UI definition and styling. So the more you can do in a single UIDocument, the better.
Now while I do mean a single UIDocument/GameObject, I don’t mean a single UXML asset and USS stylesheet. You can have a root UXML asset that you set on the UIDocument, which then instantiates your various screens as their own separate UXML assets. Then at runtime, instead of enabling/disabling GameObjects, you set:
menuScreenRoot.style.display = DisplayStyle.None; // to hide it
menuScreenRoot.style.display = DisplayStyle.Flex; // to show it
This is quite fast since the UI does not need restyling or relayouting (for the most part), so you can switch between screens almost instantly.
If you can use custom C# VisualElements to drive the UI flow logic (anything that is pure UI logic, like screen transitions), another benefit of adding more into a single UXML is that you can test the entire UI workflow within the UI Builder. This means you can iterate on the UI logic entirely at edit time without constantly going into playmode. You can then focus on just the glue code and game logic going on the GameObjects.
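To make the custom-VisualElement approach concrete, here is a hypothetical sketch of a “screen router” element that switches between child screens by toggling their display style. The names (`ScreenRouter`, `Show`) are illustrative, not an official API; because it is a pure C# VisualElement with a `UxmlFactory`, it can be placed and exercised directly in the UI Builder:

```csharp
using UnityEngine.UIElements;

// Hypothetical element: each direct child is a "screen"; only one is visible.
public class ScreenRouter : VisualElement
{
    // UxmlFactory exposes the element to UXML and the UI Builder library.
    public new class UxmlFactory : UxmlFactory<ScreenRouter> { }

    // Show the child whose element name matches; hide all others.
    public void Show(string screenName)
    {
        foreach (var child in Children())
        {
            child.style.display = child.name == screenName
                ? DisplayStyle.Flex
                : DisplayStyle.None;
        }
    }
}
```

Since the switching logic lives entirely in the element, the glue code on the GameObject side only needs to call `Show("settings")` (or similar) in response to game events.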
I assume he means UXML hotswapping, which should be a fairly trivial task? The only concern is some generic events (specific ones you need to retarget anyway), but you should be able to catch them on the origin object with the event system in a single event output.
We are in the process of updating our main Runtime demo (using the UnityRoyal project) to showcase the workflow I talk about.
Indeed, although I really just mean show/hide, not create/destroy. The idea is you would CloneTree all screens, have them all register all their events for buttons and controls, and simply control which subtree of the UI (so which screen) is currently visible using the display style property. Changing visibility doesn’t remove any bindings, nor does it incur relayout or repainting in most cases when switched back on.
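A minimal sketch of that “clone everything once, then only toggle visibility” idea might look like this. The serialized fields, class name, and the idea of one `VisualTreeAsset` per screen are assumptions for illustration:

```csharp
using UnityEngine;
using UnityEngine.UIElements;

// Hypothetical controller: clones every screen UXML once at startup,
// registers callbacks while hidden, then only flips display styles.
public class MenuController : MonoBehaviour
{
    [SerializeField] private UIDocument document;        // the single UIDocument
    [SerializeField] private VisualTreeAsset[] screens;  // one UXML asset per screen

    void OnEnable()
    {
        VisualElement root = document.rootVisualElement;
        foreach (VisualTreeAsset screen in screens)
        {
            // CloneTree instantiates the UXML into a TemplateContainer.
            TemplateContainer instance = screen.CloneTree();
            // ...register button/control callbacks here, once, while hidden...
            instance.style.display = DisplayStyle.None;
            root.Add(instance);
        }
    }
}
```

Showing a screen later is then just a matter of setting `style.display = DisplayStyle.Flex` on the one subtree you want, with no re-cloning and no re-registering of callbacks.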
Simple animations can be done via custom C# VisualElements, so they can also be iterated upon in the UI Builder. As such, there are no restrictions regarding single-UIDocument use for these kinds of animations. The same goes for animations driven from a GameObject when animating individual elements (which can be entire “screens”) inside the UXML hierarchy.
Since we don’t support world space UI yet, there’s not much need for GameObject transform animations driving the position of UI.
The example with health bars would indeed use multiple UIDocuments. Each unit prefab would include the UIDocument with the UXML/USS pre-assigned and set to be driven by the GameObject’s transform. However, assuming all UIDocuments use the same PanelSettings asset, there would still only be a single panel. The only cost would be the GameObject transforms being converted to UI screen space Absolute coordinates for the health bars.
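As a sketch of that coordinate conversion, the snippet below positions an absolutely-positioned health bar element over a unit each frame. The element name `"health-bar"` and the class layout are assumptions; `RuntimePanelUtils.CameraTransformWorldToPanel` is, to my understanding, the UI Toolkit helper for converting a world position to panel coordinates:

```csharp
using UnityEngine;
using UnityEngine.UIElements;

// Hypothetical follower: attached to a unit prefab that carries a UIDocument.
public class HealthBarFollower : MonoBehaviour
{
    [SerializeField] private UIDocument document;
    private VisualElement _bar;

    void OnEnable()
    {
        _bar = document.rootVisualElement.Q("health-bar"); // element name assumed
        _bar.style.position = Position.Absolute;
    }

    void LateUpdate()
    {
        // Convert the unit's world position into panel (UI screen space) coordinates.
        Vector2 panelPos = RuntimePanelUtils.CameraTransformWorldToPanel(
            _bar.panel, transform.position, Camera.main);
        _bar.style.left = panelPos.x;
        _bar.style.top = panelPos.y;
    }
}
```

Since all the UIDocuments share one PanelSettings asset, all the bars render into the same panel; only this per-frame coordinate conversion is paid per unit.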
If a single-UIDocument workflow is preferred, it would be useful to have some hide/show buttons in the Hierarchy section, so we can look at each panel in isolation while editing. Right now, swapping which panel is visible is quite tedious!
Note that we encourage using a single PanelSettings asset, but there is no actual restriction on the number of UIDocuments you can use. Damian updated the sample (found here) to use a single UIDocument, but there is no great harm in using multiple ones, other than having more GameObjects in the scene, with all the known caveats of that (although it’s probably far fewer GameObjects than a standard UGUI setup).
But again, as Damian said, a single UIDocument does not mean a single UXML, and each UXML can be previewed individually in the UI Builder as well.
Hey
I have some questions about this workflow (everything in one UXML when possible, setting style.display to show/hide groups and submenus): is it still the recommended workflow?
I was wondering which is preferable between setting the style.display property and the style.visibility property (performance- and usability-wise).
Currently, I am trying to give more power to designers with:
- custom buttons to show/hide a specific VisualElement
- custom elements with animations when displayed
But I am having trouble with the “when displayed” part.
Here’s my code for a custom “LayerElement” :
protected override void ExecuteDefaultAction(EventBase evt)
{
    // Do not forget to call the base function.
    base.ExecuteDefaultAction(evt);

    if (evt.eventTypeId == GeometryChangedEvent.TypeId())
    {
        if (_previousDisplay != resolvedStyle.display)
        {
            _previousDisplay = resolvedStyle.display;
            bool isAnimationRunning = _currentAnim?.isRunning ?? false;

            if (ShouldAnimate && resolvedStyle.display == DisplayStyle.Flex && !isAnimationRunning)
            {
                // Start one element-height below, then slide up into place.
                transform.position = new Vector3(0, resolvedStyle.height);
                _currentAnim = experimental.animation
                    .Position(Vector3.zero, AnimDuration)
                    .Ease(UITKAnimationUtils.GetEasingFunc(Easing));
                _currentAnim.onAnimationCompleted += ResetPosition;
            }
            else if (resolvedStyle.display == DisplayStyle.None)
            {
                if (isAnimationRunning)
                    _currentAnim.Stop();
            }
        }
    }
}

private void ResetPosition()
{
    _currentAnim.onAnimationCompleted -= ResetPosition;
    transform.position = Vector3.zero;
}
1. Is the above code a good way to check for changes to the DisplayStyle (listening to GeometryChangedEvent and checking the resolvedStyle.display property)? Or am I missing a better event/way?
2. Is this approach (meant to be?) usable to start and tweak animations from the UI Builder?
It works pretty well in preview mode with a button to show/hide the “LayerElement”, thanks to the above code.
In non-preview mode, when tweaking the display style of the “LayerElement” in the inspector, the Builder is too slow to update (it freezes for a little over half a second, so my 300 ms animation is already finished by the time the VE is displayed in the Builder viewport).
I came here because I am trying to figure out why my input events such as hover or click (which I subscribe to in C#) no longer work once a UIDocument has been destroyed and created again.
It works fine if I use the default PanelSettings asset. But once I copy that asset and use the copy on my UIDocument, the event callbacks just don’t fire anymore. Something seems to behave differently once there are multiple PanelSettings assets, as if input is no longer detected.
Once I close the editor and reopen it, the events fire again. But when the UIDocument gets destroyed and recreated, it stops working again.