For anyone starting a new Unity project today, what is the recommendation from the official Unity team?
Go with UI Toolkit and don’t touch Unity UI?
Use Unity UI for some specific things and UI Toolkit for some other specific things? What scenarios would you recommend one or the other?
Or perhaps the recommendation is based on team size?
I’ve only used UI Toolkit a little bit, but from what I’ve seen, one of the big benefits is splitting structure, logic, and visuals, so you can have a programmer, a UI designer, and an artist all working on the same UI element without any conflicts. Based on that it seems like an excellent tool for big teams where each person does a different task, but what about a solo dev?
Also not Unity staff, but having recently tried to implement a runtime UI in UI Toolkit, I don’t think it’s even close to ready for runtime use, especially if you care about being able to style it or use a game controller. I tried to implement the same basic settings menu in uGUI and UI Toolkit, so it was an apples-to-apples comparison.
Hi CodeMonkey, that’s a great question. It’s hard to answer concisely, but I’ll do my best.
In our UI comparison page that DevDunk shared (thanks for that!), there’s a section for both Runtime and Editor that summarizes the high-level use cases where one or the other should be considered.
Of course the decision is often much more complex than that, but it should help you quickly assess if it’s a good fit or not. I’d say in most cases, if you have hard requirements such as having UI rendered in world space, or need full customization over the shader, then go with UGUI, otherwise UI Toolkit should probably work best for you.
The example you give is a good one. Collaboration should be much smoother, especially at scale, and authoring workflows are generally more familiar and accessible to artists and designers. Another good use case is if your game requires a lot of complex UI containing lists and tables of data, as is often found in sports simulation games or builders; you’ll have a much easier time with UI Toolkit.
And you can use both UGUI and UI Toolkit in the same project, sharing Sprites and Font Assets, so you could see a use case where the menus are done using UI Toolkit and a HUD containing advanced visual styles is made with UGUI.
I don’t think it’s even close to ready for runtime use
I don’t agree. It depends on your needs. If all you need is a basic UI with some interaction, UIToolkit can do it.
However, I don’t see how UIToolkit would integrate with the physics system and other components. If you need this (physics, colliders) or some of its unfinished features (e.g. shaders, see the linked comparison above), you should stay with uGUI.
if you care about being able to style it
The stylesheets are exactly for this. Changing style is a matter of loading different stylesheets.
You can even use variables in stylesheets and only override those to apply different colors, sizes, etc.
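(Not from the thread, but for illustration: a minimal sketch of that stylesheet-swapping idea in C#, assuming two StyleSheet assets assigned in the Inspector — the field and class names here are placeholders.)

```csharp
using UnityEngine;
using UnityEngine.UIElements;

// Minimal sketch: swap theme stylesheets at runtime.
// lightTheme/darkTheme are assumed StyleSheet assets assigned in the Inspector.
[RequireComponent(typeof(UIDocument))]
public class ThemeSwitcher : MonoBehaviour
{
    [SerializeField] StyleSheet lightTheme;
    [SerializeField] StyleSheet darkTheme;

    public void ApplyDarkTheme()
    {
        var root = GetComponent<UIDocument>().rootVisualElement;

        // Remove the old sheet (if present) and add the new one; any USS variables
        // it defines (e.g. --primary-color) cascade to the whole element tree.
        if (root.styleSheets.Contains(lightTheme))
            root.styleSheets.Remove(lightTheme);
        root.styleSheets.Add(darkTheme);
    }
}
```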
or use a game controller
From my experience, the default controller navigation is just as bad as it was with uGUI.
I agree, you need to implement a better default controller navigation.
I came up with my own solution, but it isn’t great either.
I think default controller navigation needs to consider the hierarchy of the elements.
Further, one could use USS classes or attributes in UXML to opt-in / opt-out as a navigation target.
The next / previous VisualElement when navigating in a certain direction could also be specified as an attribute, e.g. next-component-up="#the-component-id".
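(For reference, here is a minimal sketch of how explicit navigation can be wired up today in C# using NavigationMoveEvent; the "play-button" / "quit-button" element names are made up, and the PreventDefault call may be flagged as obsolete on newer Unity versions.)

```csharp
using UnityEngine;
using UnityEngine.UIElements;

// Sketch: override default gamepad/keyboard navigation for one element.
// "play-button" and "quit-button" are assumed element names from the UXML.
[RequireComponent(typeof(UIDocument))]
public class ExplicitNavigation : MonoBehaviour
{
    void OnEnable()
    {
        var root = GetComponent<UIDocument>().rootVisualElement;
        var playButton = root.Q<Button>("play-button");
        var quitButton = root.Q<Button>("quit-button");

        playButton.RegisterCallback<NavigationMoveEvent>(evt =>
        {
            // Only take over the "down" direction; everything else keeps the default behaviour.
            if (evt.direction == NavigationMoveEvent.Direction.Down)
            {
                quitButton.Focus();
                evt.PreventDefault();  // may be marked obsolete on newer versions
                evt.StopPropagation();
            }
        });
    }
}
```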
I guess another thing to consider is third-party plugins/utilities, shared knowledge, etc. I haven’t used UI Toolkit apart from some simple tests a long time ago, but I assume that for now there are more resources for uGUI, right?
Yes that’s a very good point. For a newcomer, it will be easier to learn UGUI based on existing learning material, samples, Asset Store extensions and overall community having a good understanding of the feature set.
Hi, I just watched a tutorial, and UI Toolkit seems similar to UI tools in other software. I love UGUI because it allows me to create non-standard elements easily and quickly. Additionally, I can create transition animations or idle animations, and I can utilize any Unity3D feature within the UI. Although I haven’t used UI Toolkit, I believe that all the features I described might be challenging to use there. Furthermore, UI Toolkit offers many style features (properties) that I don’t actually need and won’t use. Therefore, in my opinion, if you don’t require the same UI template for your next 20 games, stick with UGUI.
I’ve found it to be the exact opposite: UI Toolkit is lacking an absolute ton of stuff that I would need on a regular basis, and that’s on top of it being low performance once you start creating complex UIs. With UGUI I can pretty much always make it work.
The main selling point for UI Toolkit, in my opinion, is the ability to animate using pseudo-classes like :hover, which is a pretty basic CSS pseudo-class even for beginners. The old Unity UI/Canvas-style buttons only sort of have a hover state: the only thing that changes is the color when you hover over them. With UI Toolkit you can change the spacing, font size, scale, background color, text shadow, etc. on a hover event, so animating UI elements is way nicer in this regard.
Another thing is the border property. If you’re aiming to make a more image-based UI this is not really applicable, but with the Canvas approach I had to use a PNG image of a border outline, which 1. is not vector based, so it doesn’t scale well, and 2. means that if I want to change the outline I have to make another outline PNG. With UI Toolkit’s border property, however, I can make dynamic borders, choose which edges to render, which corners to round and with what radius, and how thick the border is, and it’s vector based, so it scales nicely at any screen resolution.
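(For illustration only: the same border properties are also exposed on the C# inline style API, so a sketch like the one below draws a vector border with no image at all; "panel" is just an assumed existing element.)

```csharp
using UnityEngine;
using UnityEngine.UIElements;

// Sketch: a dynamic, vector-based border set entirely from C# inline styles
// (the same properties exist in USS). "panel" is an assumed existing element.
static class BorderExample
{
    public static void ApplyBorder(VisualElement panel)
    {
        // Only left and bottom edges, 2 px thick.
        panel.style.borderLeftWidth = 2;
        panel.style.borderBottomWidth = 2;
        panel.style.borderLeftColor = Color.white;
        panel.style.borderBottomColor = Color.white;

        // Round only the bottom-left corner.
        panel.style.borderBottomLeftRadius = 12;
    }
}
```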
Currently, my Pause and HUD UIs use UI Toolkit, but my grid-based inventory UI system uses the old UI system, because I had the drag-and-drop functionality already implemented there.
My suggestion: for UIs that don’t require a lot of interactivity or just need to display game state data (like a HUD), use UI Toolkit, but for UIs that need a lot of user interactivity, especially when data needs to pass from one UI element to another (inventory UIs), use Unity UI.
I think with runtime bindings now available for UI Toolkit, this is less the case. Alongside taking a model-view approach with your data and UI (which you should generally do anyway), I think this sort of thing is probably easier, or at least more manageable, in UI Toolkit now.
I know with the last bit of UI I did, and probably the most complicated bit of UI I’ve done to date - which was a UI for refining items (but not in your typical Minecraft-y sorta way) - it would’ve been substantially more difficult without the use of Runtime bindings.
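(Not from the original post, but a minimal sketch of that model-view + runtime binding setup, assuming Unity’s runtime data binding from Unity 6 / 2023.2+; ScoreModel and the "score-label" element name are made-up examples.)

```csharp
using Unity.Properties;
using UnityEngine;
using UnityEngine.UIElements;

// Hypothetical view model; [CreateProperty] exposes the property to the binding system.
public class ScoreModel
{
    [CreateProperty] public string ScoreText { get; set; } = "0";
}

[RequireComponent(typeof(UIDocument))]
public class ScoreHud : MonoBehaviour
{
    readonly ScoreModel model = new ScoreModel();

    void OnEnable()
    {
        var root = GetComponent<UIDocument>().rootVisualElement;
        var label = root.Q<Label>("score-label"); // assumed element name from the UXML

        // Bind the label's text to the model; after this, gameplay code only touches the model.
        label.dataSource = model;
        label.SetBinding("text", new DataBinding
        {
            dataSourcePath = new PropertyPath(nameof(ScoreModel.ScoreText)),
            bindingMode = BindingMode.ToTarget,
            updateTrigger = BindingUpdateTrigger.EveryUpdate // simplest trigger for a sketch
        });
    }

    public void SetScore(int score) => model.ScoreText = score.ToString();
}
```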
The way runtime bindings work is actually, IMO, one of their biggest drawbacks. You have to get VisualElement components by querying string references, which I find unintuitive and very error-prone. The old system let you assign UI elements like buttons or sliders as GameObjects, but with a string-reference system one small typo can cause a failure, and having multiple VisualElements that need a runtime binding requires increasingly convoluted strings so as to not conflict with an existing string reference (and more convoluted strings mean more chances of a typo).
Hopefully Unity comes up with a more object-based approach where each VisualElement can be referenced by object rather than by string.
Edit: I hold references to certain VisualElements and use those to query for other VisualElements rather than always querying from the rootVisualElement, which is a way to avoid duplicate string references.
I mean you can make custom visual element types, and then Q<T> with your custom type. I often have a custom root element that handles the overarching UI, then I can just pass a data object to it, and let the visual elements handle the binding on their own.
Again, model view approach. Just have a blob of data, and grab that data and throw it into your UI to handle. UI should be a top-down approach anyway, rather than the other way around. I keep UI code in their own assembly definitions, too, to reinforce this.
I’m pretty sure I have zero, or next to zero, string queries in my current project.
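(For what it’s worth, a minimal sketch of that custom-element pattern, assuming the [UxmlElement] attribute from newer Unity versions — older versions use a UxmlFactory instead. InventoryPanel and all names here are made up.)

```csharp
using UnityEngine;
using UnityEngine.UIElements;

// Sketch: a custom root element owns its own children and data handling,
// so the calling code never deals with string queries.
[UxmlElement]
public partial class InventoryPanel : VisualElement
{
    readonly Label titleLabel;

    public InventoryPanel()
    {
        // Build children once, internally.
        titleLabel = new Label();
        Add(titleLabel);
    }

    // The outside world just hands over a data object (a string here, for brevity).
    public void SetData(string title) => titleLabel.text = title;
}

[RequireComponent(typeof(UIDocument))]
public class InventoryController : MonoBehaviour
{
    void OnEnable()
    {
        var root = GetComponent<UIDocument>().rootVisualElement;
        // Query by type instead of by string name.
        root.Q<InventoryPanel>()?.SetData("Backpack");
    }
}
```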
To be honest I think the biggest reason folks have issues with UI Toolkit is they approach it like they would uGUI. It’s not uGUI and the same approaches either won’t work, or won’t work well. You really need to change the way you think about and approach your UI when using UI Toolkit.
This is missing from UGUI, but I now use an Asset Store add-on which offers essentially the same thing: you can make borders and backgrounds with any corner radius, drawn by a shader rather than with images.