UI Toolkit Hierarchy to Scene Hierarchy

UI Toolkit offers a lot of cool features, such as styling shared across multiple elements, a layout system that actually works the way it should, fields to set the render order, and much more.

BUT the worst thing is that it is much harder to work with, compared to the uGUI Canvas system, because we can’t create Prefabs in a convenient way (just “Templates”), we can’t attach components to our UI controls, and we can’t reference anything in the UI properly; it is all just a black box.

How hard would it really be to provide the option to move the controls and panels (UI elements) to the scene hierarchy? The UI Builder has a hierarchy, an inspector, a preview window - everything that’s also present in the Unity Editor. It’s like an editor within the Editor. So if there were wrappers over the UI elements (like “Button” or “VisualElement”) which rendered the corresponding inspector into the Unity Inspector window (like all regular components do), we would be halfway there.
The only remaining thing would be to automatically generate a UXML based on the scene hierarchy and to update it whenever that hierarchy changes.

Now to the feature request: please provide this workflow. Add the wrappers, add the auto-UXML generator, so we finally have the best of both worlds: Prefabs, references to UI controls, the ability to attach components to the UI, to manipulate their transforms with tweening, to manipulate their enabled state by deactivating components or GameObjects, an update loop on individual UI controls (through attached components) PLUS all the cool layouting, sorting order and multi-element hierarchical styling (USS) that’s already present in UI Toolkit.

Additional note: if there is an issue with rendering UI to the Scene view, as is currently the case with uGUI - just don’t. Keep the preview window and render the UI inside there; it’s no issue to view it in another editor window than the Scene view, it’s only an issue if the UI isn’t part of the hierarchy. This preview window then doesn’t need a hierarchy and inspector itself; it’s like a minimal version of the UI Builder that has just the “Scene view” (just the preview of the UI).

So you just want UI Toolkit to become uGUI again? God, please no. uGUI living in scenes/game objects is arguably the worst part of it. Solving that problem is half the reason why we got the UI Builder in the first place.

You don’t need to. You can make custom visual elements that encapsulate all their functionality inside.

We can in newer Unity versions.

You’re coming from the perspective of having used uGUI a lot and trying to apply all the same principles to it, when it’s a different workflow entirely that you need to learn to use. Once you get the hang of it you’ll be glad it isn’t just another uGUI.


For the Editor it’s amazing compared to the old IMGUI approach; I’ve used it there extensively. But in-game, I still use uGUI because … well, I guess mostly because I can’t create a “Template” with a script attached that does a specific job. Like … a settings menu that can be opened from the main menu as well as in-game. Or a delayed health bar (where you see the damage taken for a second) that can then be used as a mana bar and stamina bar as well - just with different settings on the script.

Writing VisualElements is … not super intuitive (or efficient). Creating prefabs (or templates) is intuitive. But you can’t create a custom control out of a Template, right? Is it even possible to reference scene objects from the UI (e.g. to enable a GameObject with another UI, a crafting menu or something)?
(haven’t checked out the newest version where the inspector of the UI-Builder got new reference fields)

I just don’t see any advantages of the whole UXML thing, aside from the UI not rendering as a 1000 m tall object in the scene. I only see the disadvantages (can’t attach components, can’t create prefabs with set references, can’t reference things in the usual way). USS is nice, the whole VisualElement API is also quite nice, with OnValueChanged and SetValueWithoutNotify - but the UXML and the custom editor within an editor make no sense to me (outside of editor window creation).

Uh, you know you can clone the visual element hierarchy of a visual tree asset? https://docs.unity3d.com/ScriptReference/UIElements.VisualTreeAsset.CloneTree.html

So yes you can make templates and instantiate them as needed. You can also just write custom visual elements that build themselves via code, and just new() these where needed.
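As a concrete illustration of the CloneTree workflow mentioned above, here is a minimal sketch (the asset and field names like `healthBarTemplate` are placeholders for your own project):

```csharp
using UnityEngine;
using UnityEngine.UIElements;

public class HealthBarSpawner : MonoBehaviour
{
    [SerializeField] private VisualTreeAsset healthBarTemplate; // assign the .uxml in the Inspector
    [SerializeField] private UIDocument document;

    private void Start()
    {
        // CloneTree returns a TemplateContainer holding a fresh copy
        // of the template's element hierarchy.
        TemplateContainer instance = healthBarTemplate.CloneTree();
        document.rootVisualElement.Add(instance);
    }
}
```

You can call CloneTree as many times as you like to instantiate the same template, much like instantiating a prefab.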

Not sure what you’re doing wrong because I’ve done this multiple times without issue.

Also extremely easy to do. Just requires a custom visual element class.

It’s absurdly easy, and it only got easier with the recent attribute-driven approach that removes tons of boilerplate.
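For reference, the attribute-driven approach (available from Unity 2023.2) looks roughly like this; the class and attribute names here are made-up examples:

```csharp
using UnityEngine.UIElements;

// [UxmlElement] makes this class usable in UXML and visible in the
// UI Builder without the older UxmlFactory/UxmlTraits boilerplate.
[UxmlElement]
public partial class DelayedHealthBar : VisualElement
{
    // Exposed as an editable attribute in UXML and the UI Builder inspector.
    [UxmlAttribute]
    public float delaySeconds { get; set; } = 1f;

    public DelayedHealthBar()
    {
        AddToClassList("delayed-health-bar"); // style via USS
    }
}
```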

Project level assets have never been able to reference scene level objects. This restriction is not unique to UI Toolkit at all.

Once again, you’re coming too much from the whole uGUI approach and haven’t fully learnt/embraced UI Toolkit’s very different approach.


I second spiney199 here.

Actually, when I was getting into UI Toolkit I had more or less the same problems with the different approach it requires. Later on I understood and embraced that approach, but it takes a slight mind shift from the previous uGUI paradigm.

Now, the only thing (well, aside from some obvious functionality not being implemented yet) is that I’m unable to view where a UI element might be if it’s offscreen. I’d like the UI Debugger to have something like a viewport for the full UI canvas.

We are always working on improving workflow and this is being taken into account. You can always suggest new ideas for consideration here : https://unity.com/roadmap/unity-platform/ui

With the new binding system I was able to more easily cross this barrier you are talking about:

  • You create a singleton scriptable asset that represents the data of the UI
  • By assigning that as the data source of the binding in the UI Builder, the fields are reflected in real time in the Builder. This helps a lot to speed up the development cycle
  • Have a reset method on your scriptable object
  • connect it to your scene objects at runtime.
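The steps above could look roughly like this; a sketch assuming the runtime binding system from Unity 2023.2+, with all names (`PlayerHudData`, the hypothetical `Health` component) invented for illustration:

```csharp
using UnityEngine;
using UnityEngine.UIElements;

// Singleton ScriptableObject acting as the UI's data source.
// In UI Builder you'd assign this asset as the binding data source
// and bind a control to the "playerHealth" path.
[CreateAssetMenu(menuName = "UI/Player HUD Data")]
public class PlayerHudData : ScriptableObject
{
    public static PlayerHudData Instance { get; private set; }

    public float playerHealth;

    private void OnEnable() => Instance = this;

    // Reset to known defaults when a scene (re)starts.
    public void ResetData() => playerHealth = 100f;
}

// A scene object pushes gameplay state into the asset at runtime;
// the binding set up in UI Builder picks the change up automatically.
public class PlayerHealthSync : MonoBehaviour
{
    [SerializeField] private Health health; // hypothetical gameplay component

    private void Update()
    {
        PlayerHudData.Instance.playerHealth = health.current;
    }
}
```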

This was surprisingly efficient for me.

I also saw a team using the UXML class attribute to find the proper element in the hierarchy and call a method on it, with great success.

Thanks for the suggestion! You can use the bounding box in the UI Debugger to have an approximate idea but having it visually would be nicer.


You could change the scale of the root to something like 0.1 in x and y, or use the translate property in the meantime.

Yes, that’s the default way to use templates, isn’t it?

But can I create a template in the UI Builder and then create a custom visual element which does something with those elements? The workflow here seems a bit complicated.
Once I’ve built the visuals I want to work with, I’d have to code a custom visual element which I then instantiate in the Builder to set it as new parent to then make a template out of the whole thing.
To get more or less the same functionality as → create visuals with uGUI, add a component that references the children, create a prefab out of the whole thing.

Exactly, but when UI elements are scene objects as well, instead of assets, this restriction doesn’t apply. The thing that “UI as scene objects” really enables is references between different Canvases / UI objects.

But … I really have to look into the new referencing-thing, maybe it solves a lot of problems. Last time I tried using it for a game (about 1-2 years ago) the lack of proper references was a really big issue. And the UI Sample that was published had tons of static events across all classes, which I consider a poor approach to connect things, as everything is tightly coupled. Change one class → change all the other classes as well because of all the references.

When I made the post, I just thought of all the benefits in terms of simplicity and how easy it is to connect things in uGUI and how the new UI could do the same with some simple wrappers and I thought of sharing this idea.

Soooo … there is still no other way than “find by string” and “find by type” to get a specific UI Element? This is from the documentation of 2023.3.
(screenshot of the 2023.3 documentation table attached)

This seemed like there is a new way to reference things (UI controls), but according to the documentation, there isn’t (“Not possible”)? I’m confused.

How would I then contact the UI Control to open a menu, when I approach a machine ingame? The player goes to a terminal and presses “E” so a UI-panel opens and displays a text with some buttons like “Open Door” and “Move Crane” - without having references to those Buttons, how would this work in UI Toolkit? With uGUI I’d just open a specific Canvas with those two Buttons which I have references to, from my Terminal.cs Script and register callbacks to then Move the Crane or Open the Door - all in one script with less than 30 lines of code.

Now if I don’t want to search for those two buttons by name, how would this work?
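For comparison, the “find by string” lookup being discussed is UQuery’s `Q<T>(name)`; a minimal sketch of the terminal scenario, with element names and door/crane methods invented for illustration:

```csharp
using UnityEngine;
using UnityEngine.UIElements;

public class Terminal : MonoBehaviour
{
    [SerializeField] private UIDocument terminalUi;

    private void OnEnable()
    {
        VisualElement root = terminalUi.rootVisualElement;
        // Q<T>(name) is UI Toolkit's "find by string" lookup;
        // the names must match those set in UXML / UI Builder.
        root.Q<Button>("open-door").clicked += OpenDoor;
        root.Q<Button>("move-crane").clicked += MoveCrane;
    }

    private void OpenDoor() { /* door logic */ }
    private void MoveCrane() { /* crane logic */ }
}
```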

Like I said, you can just new() visual elements via code too. This is my usual workflow as I’m more code-oriented.

You said “we can’t reference anything”, you didn’t really specify other visual elements. I assumed assets, which you can in newer versions.

I think you need to learn to take more data driven approaches as you have this all backwards. UI benefits the most from model-view approaches (which UI toolkit leans into heavily), where the UI is just… UI and is simply a visual display of underlying data (which includes objects that encapsulate behaviour). I’ve done this kind of stuff with UI toolkit easily already.

Interactable objects, when triggered, just pass along a data object to a component on the player. This instantiates a new button (likely a custom class derived from UIElements.Button) that injects the data object and displays it. The player hits the interact button, and the method that’s been registered to said object gets invoked, and the action happens.

No need to reference anything. Just purely generating things on the fly using a data driven approach. And funny thing is I used this same approach with uGUI too, as this approach is so much more flexible when it comes to UI.
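The data-driven pattern described above could be sketched like this (all type names are illustrative, not an official API):

```csharp
using System;
using UnityEngine.UIElements;

// Plain data object describing one interaction an object offers.
public class InteractionData
{
    public string Label;
    public Action OnInteract;
}

// A button generated on the fly from a data object.
public class InteractionButton : Button
{
    public InteractionButton(InteractionData data)
    {
        text = data.Label;
        clicked += data.OnInteract;
    }
}

// Usage, e.g. in a component on the player when a terminal is triggered:
// panel.Add(new InteractionButton(new InteractionData {
//     Label = "Open Door", OnInteract = door.Open }));
```

No element needs to be referenced ahead of time; the UI is built from whatever data the interactable hands over.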

Once again you are coming at this way too heavily from the approach of uGUI. UI Toolkit is not uGUI. Trying to make it uGUI isn’t going to make it any better.

By using this approach it’s indeed possible to sync global data back and forth between the UI and Game Systems.
PlayerHealthUIData_SO.Instance.playerHealth --Binding-> UI
or
InputField --Binding-> UIData_SO.Instance.playerName --> PlayerInfoComponent

Binding requires a string … so if anything in the UIData_SO changes, the connections will break.
I guess we could use fake converters to add another string between our data and the UI, so we can change variable names without running into issues.
On the other hand, we could use ScriptableVariables (ScriptableObject architecture) to have the binding always point to “value”, and we can drag and drop any SO, which we can rename, to link the value we want.

This kind of architecture is one that seems scalable to me. Is it also possible to send a button event to an SO that’s linked in the inspector of the VisualElement button?

I’m a bit worried about garbage when doing things this way. Using new() also means that at some point, I have to clean up things I don’t need anymore, or would you keep all UIs from all Terminals you interacted with, somewhere in the UI Hierarchy?

There is no update loop in VisualElements, right? Also no enable/disable, and no components that can be added to existing behaviours (classes derived from VisualElement).
I’m using update loops in my UI to listen for inputs or things happening in the game world or the UI - and also for animations and timers. If I still have to write components for such stuff, that means I need two classes instead of one: a component in my scene for the actual timer logic and a class derived from VisualElement for the representation, right?

As I’m understanding this right now, it would mean that I have a component like “IngameMenuController” that just handles whether the IngameMenu should be visible or not - and then a VisualElement which receives that info and shows/hides the IngameMenu accordingly. Same for the settings menu, the inventory, a popup box, …
Isn’t that just boilerplate in a lot of cases?

And … what about sounds? With the ability to link SOs to VisualElements this should be solved, right?

They’re not Unity objects. They get garbage collected like normal once they’re removed from the visual hierarchy and are no longer referenced by anything.

Every visual element has a scheduler property that you can use to schedule actions to be called: https://docs.unity3d.com/ScriptReference/UIElements.IVisualElementScheduler.Execute.html
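A small sketch of what the scheduler gives you inside a custom element (the interval and countdown behaviour are made-up examples):

```csharp
using UnityEngine.UIElements;

public class CountdownLabel : Label
{
    private float remaining = 10f;

    public CountdownLabel()
    {
        // Runs every 100 ms while the element is attached to a panel;
        // this stands in for a MonoBehaviour Update loop.
        schedule.Execute(() =>
        {
            remaining -= 0.1f;
            text = $"{remaining:0.0}s";
        }).Every(100).Until(() => remaining <= 0f);
    }
}
```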

That said…

You can handle events in Visual elements in a more logical, call-back style manner: https://docs.unity3d.com/Manual/UIE-Navigation-Events.html

In any case your UI would have multiple components anyway to properly encapsulate behaviour across your UI in uGUI. Having one component script and some visual element scripts is no different.

Your uGUI UIs would have this anyway, so nothing’s changed here.

Oh, I didn’t know about that.
It looks like it could make things a lot easier in many cases.

With uGUI I can just build a UI with all the buttons, then attach a new script, add references which I can drag and drop, and write all the logic I want inside this script. Whether it should be visible when the player holds down a button, whether it should move to the left when some other UI gets enabled, transferring items from the player inventory to a chest inventory - everything the UI needs to do. And I only need one script per in-game UI window to do the job (instead of two scripts, one deriving from MonoBehaviour, the other from VisualElement).

With the update loop though, it might be possible to keep this simplistic style (we are talking about scripts with <100 lines of code in most cases).

In quite some cases you want to interrupt an animation in the UI and make it play backwards: Open Menu → Close Menu before the open animation was completed. With an update loop that just lerps from A to B, that’s super simple - no matter where it is, it will move towards the target.
I know UI animations are also a thing now … but I don’t think they support sounds? And they are strange to set up currently (the roadmap says they will get improvements).
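The interruptible lerp described above can also be done inside a visual element via the scheduler; a sketch with invented names and timing values:

```csharp
using UnityEngine;
using UnityEngine.UIElements;

public class SlidingMenu : VisualElement
{
    private float target;  // 0 = closed, 1 = open
    private float current;

    public SlidingMenu()
    {
        // Move toward the target every frame; reversing mid-animation
        // "just works" because we always lerp from wherever we are.
        schedule.Execute(() =>
        {
            current = Mathf.MoveTowards(current, target, Time.deltaTime * 4f);
            style.translate = new Translate(
                Length.Percent((1f - current) * -100f), Length.Percent(0), 0);
        }).Every(0); // 0 ms interval = every panel update
    }

    public void Open()  => target = 1f;
    public void Close() => target = 0f;
}
```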

Another example is waiting for multiple conditions → Is there still ammo in the magazine? Is the player currently aiming down sights? Is the reload animation playing right now?
Instead of setting up events for all those cases, it’s much easier to just expose these things as booleans and have them checked inside an update loop on the UI.
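Polling exposed booleans like this maps onto the scheduler as well; a sketch where the condition delegates are hypothetical stand-ins for gameplay state:

```csharp
using System;
using UnityEngine.UIElements;

public class ReloadPromptElement : VisualElement
{
    // Hypothetical gameplay state, exposed as plain booleans.
    public Func<bool> HasAmmo = () => true;
    public Func<bool> IsAiming = () => false;
    public Func<bool> IsReloading = () => false;

    public ReloadPromptElement()
    {
        // Check the conditions every frame instead of wiring up events.
        schedule.Execute(() =>
        {
            bool show = HasAmmo() && IsAiming() && !IsReloading();
            style.display = show ? DisplayStyle.Flex : DisplayStyle.None;
        }).Every(0);
    }
}
```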

So basically, a lot of things have already been sorted. Visual Elements can have an update loop and they can reference assets, which can be dragged and dropped into the VisualElement inspector. I didn’t know about these two things. Also binding is a thing now and makes some cases easier (I’ve done all binding in update-loops in uGUI).

Now the only thing I’m really missing is: references from one VisualElement to another one, that can be set with drag&drop. (well and the workflow of creating the Visuals of the UI first, all the Buttons and everything and then just adding a component and writing the logic later … it is somewhat possible, by adding a parent, but still not as cool as just adding logic without changing the hierarchy)

Ew, that’s just bad programming. You should be splitting up responsibility. When I worked in uGUI, most UI elements had their own component to manage things. An inventory was at least a dozen component scripts, if not more. Now it’s one component script and 11 visual elements. Same as normal.

Writing less code isn’t ‘better’ by the way. You seem to be too focused on having less scripts.

Animations are just transitions from one state to another. Again, UI toolkit has a separation of model and view. Separate this out. Your UI can just respond to changes to some data object it has a reference to.

Same as above. Separate this responsibility. UI Just displays information. Keep the information separate from the UI. Then you only need to manipulate the data and the UI responds accordingly.

I’ve said it like five times now: you’re just stuck in the ways of uGUI. Everything you did before you can do now, just that the process is different. Learn those processes rather than constantly pushing back against them. There is genuinely no point in UI Toolkit - after it’s spent all this time not being uGUI - suddenly regressing back to what it was specifically designed not to be.

That’s what I am talking about. I just mean that I don’t want to prepare data for the UI but to just expose data however I want and UI handles how to use that data, without setting up specific events in MonoBehaviours just for the UI.
(and in the example, I’d just expose the three booleans and no events and the UI has to check the frame where all booleans are “true” → a reason to use the update loop inside the UI (the “schedule” thing you mentioned above))

Yes and thanks a lot for all your clarifications. I made a feature-request on the roadmap: Reference one VisualElement with drag and drop into the inspector of another VisualElement. I don’t see any reason why this shouldn’t be possible or why I have to name something and then reference it by Name … inside the very same UI - just auto-generate random IDs and let me reference those.

Less code = less problems and less work. It’s efficient, as there are also fewer lines to check while debugging, and the whole thing is usually easier to understand. Aiming for minimum complexity while still keeping things modular (thus also extensible or “reducible”) and maintainable is way better than the opposite: boilerplate and over-engineering. If I had to choose between the two extremes, I’d always pick over-minimalism.

“Less code” from your examples here just sounds like you want one script rather than two, while the one script still does the same work as the two scripts. That’s not less code, you’ve just mashed responsibility together.

Having responsibility separated can equal the same amount of code, but with a clear delineation between what each part does. And often you can easily change, modify, and reuse these bits with little hassle.

Just yesterday I opted to remove the last bit of uGUI from my current project (was using it to have a pop-up menu that followed the player, but doing this in UI Toolkit turned out to be easier, honestly). The system is comprised of 8 scripts so far, and I was able to convert it to UI toolkit without changing any of the core functionality. Basically two of the scripts became visual elements rather than components, and changed a few of the API calls.

Otherwise the spaghetti and meatballs of the system remained completely unchanged. Would not have been this easy if it was just “one component”, I can guarantee you that.

In any case, we can go in circles about this, but I feel like you’re asking for a whole new UI system rather than a useful modification. UI Toolkit has many other aspects that need focus first.
