UI development and implementation in Unity

Hi everyone,

Welcome to the third article in our series based on the UI Toolkit sample – Dragon Crashers (2022 LTS), where we cover UI development and implementation in Unity.

User interface design is a critical part of any game. A solid graphical user interface (GUI) is an extension of a game’s visual identity. Modern audiences expect games to have refined, intuitive GUIs that seamlessly integrate with the gameplay. Whether it’s displaying a character’s vital statistics or the game world’s economy, the interface is your player’s gateway to key information.


Download User interface design and implementation in Unity for free.

This page explains the basics of UI development and implementation in Unity. It’s based on a chapter in the e-book, User interface design and implementation in Unity, where artists, designers, and developers can find tips and best practices for building sophisticated interfaces with Unity’s two UI systems, Unity UI and UI Toolkit.

UI asset preparation

As a UI artist, you will spend the bulk of your production time preparing and integrating visual elements into your game, and polishing final artwork to meet your target specifications. All assets need to adhere to guidelines for pixel resolution, texture budget, and file organization.


An image from the UI Toolkit - Dragon Crashers sample in the Editor.

Implementing the UI in Unity can vary from team to team. It can be a more specialized task on larger teams and require dedicated UI developers, or directly involve UI artists within the process on smaller teams, as discussed in the following sections. Be sure to agree on production standards as a team for the asset specifications, naming conventions, and file paths.

Several Unity tools can streamline the process of moving sprites from DCC tools to Unity and prepare them to be used in the game:

  • Sprite Editor: An essential 2D tool to edit sprites; 9-slice an image, change the pivot point, make the image tileable, and much more
  • PSD Importer: Import your layered PSD files directly into Unity and use the layers or groups as individual sprites. It saves UI artists a lot of round-tripping time because they don’t have to export layers and re-import them after every modification; simply edit and save the PSD file, and Unity will reflect the changes.
  • AssetPostprocessor: In a production involving thousands of assets, avoid manually configuring each asset’s import settings in the Inspector. Use the AssetPostprocessor API to automate asset validation: it hooks into the import pipeline so you can run scripts before or after assets are imported. A single script can apply the required settings and automatically reapply them across all assets in the project.
  • Presets: These are assets that you can use to save and apply identical property settings across multiple components or assets. You can also use Presets to specify default settings for new components and default import settings for assets in the Preset Manager.
  • Sprite Atlas: This enables you to pack sprites in a single texture called an atlas, thereby reducing draw calls and texture management work. As a UI artist you no longer need to carefully arrange sprites in a single texture.
  • Simulator view: This is available via the Game view drop-down menu. It simulates different devices so you can quickly preview how the UI responds to different screen configurations.
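As a rough illustration of the AssetPostprocessor workflow described above, the following sketch enforces a few sprite import settings for everything imported under an assumed Assets/UI/ folder. The folder path, class name, and the specific values are examples, not requirements of the sample project:

```csharp
using UnityEditor;

// Hypothetical example: enforce UI sprite import settings automatically.
// Any texture imported under the assumed "Assets/UI/" folder gets the same
// settings, so nobody has to configure them by hand in the Inspector.
public class UISpritePostprocessor : AssetPostprocessor
{
    void OnPreprocessTexture()
    {
        if (!assetPath.StartsWith("Assets/UI/"))
            return;

        var importer = (TextureImporter)assetImporter;
        importer.textureType = TextureImporterType.Sprite;
        importer.mipmapEnabled = false;   // UI sprites rarely need mipmaps
        importer.maxTextureSize = 2048;   // example texture budget
    }
}
```

Because OnPreprocessTexture runs before the texture is imported, changing this script and reimporting the folder reapplies the settings everywhere at once.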

Two UI systems: Unity UI and UI Toolkit

In Unity 2022 LTS, there are two systems for creating runtime or game UI:

  • Unity UI: Released with Unity 4.6 (late 2014), Unity UI, also known as UGUI, is a GameObject-based UI system, wherein the design of the interface takes place in the Scene view.
  • UI Toolkit: A new UI system that can be used alongside Unity UI, and that’s inspired by website design workflows; the design of the interface takes place in UI Builder.

Both systems have a WYSIWYG approach for UI artists, with some key differences.

UI Toolkit and Unity UI can work simultaneously in the same project. We recommend adopting UI Toolkit progressively so your UI artists can become familiar with it. It may have a steeper learning curve if the team isn’t used to web-based design and layouts, but it provides benefits like scalability, performance, and decoupling of visuals and code that will help many teams.

UI Toolkit is currently used in the UI of the Unity Editor itself, and a number of Unity packages and features have interfaces made with it as well. Runtime support is more recent, but it’s already being used for in-game UI, such as for Timberborn by Mechanistry or Just Dance 2023 by Ubisoft.

See the following short chart that compares the two Unity UI systems or check out a more in-depth comparison in the documentation.

Designing with Unity UI: Canvas

The building block: Canvas

The Canvas area is depicted as a rectangle in the Scene view. UI elements in the Canvas are drawn in the order they appear in the Hierarchy. The child element at the top renders first, the second child next, and so on. Drag the elements in the Hierarchy to reorder them.


The Canvas area in Unity UI

Canvases can render using three different modes:

  • Screen Space – Overlay: This rendering mode overlays the UI on top of everything in the 3D scene. No 3D object will be rendered in front of the UI, regardless of its placement. The Canvas has the same size as the Game view resolution and automatically changes to match the screen. Post-processing does not affect the UI.
  • Screen Space – Camera: This is similar to Screen Space – Overlay, but in this mode, the Canvas appears at a given distance in front of a specified Camera component. The Camera’s settings (Perspective, Field of View, etc.) affect the appearance of the UI. The UI appears on a 3D plane in front of the Camera defined by the plane distance. The GameObjects can be behind or in front of the Canvas, depending on their 3D positions. This Canvas also has the same size as the Game view resolution.
  • World Space: In this render mode, the Canvas behaves like a GameObject in the scene. The size of the Canvas can be set manually using its Rect Transform. UI elements will render behind or in front of other objects based on the 3D placement of the Canvas. This is useful for UIs that are meant to be a part of the game world.

When first creating a Canvas, the Render Mode setting defaults to Screen Space – Overlay, which appears very large in the Scene view compared to other GameObjects. One Unity unit (typically one meter) represents one pixel, so creating a UI at HD resolution, for example, makes the Canvas 1920 x 1080 Unity units in the Scene view.

If you’re having trouble working on objects of vastly different scale, use the Frame Selected tool. Double-click a different GameObject in the Hierarchy view or press the F shortcut in any view to fit and focus the view to the selected object.

Tip: Optimization

If you have one large Canvas with thousands of elements, updating a single UI element forces the whole Canvas to update. This can potentially consume CPU resources and hurt performance.

Take advantage of Unity UI’s ability to support multiple Canvases and divide UI elements based on how frequently they need to be refreshed. Keep static UI elements on a separate Canvas, and dynamic elements that update at the same time on smaller sub-canvases.

Designing with Unity UI: Prebuilt UI elements

Layout and prebuilt UI elements

The different elements in the Canvas use the Rect Transform component instead of the regular Transform component. The Rect Transform component is a rectangle that can contain a UI element.

Rect Transforms include a layout concept called Anchors. Anchors appear as four small triangular handles in the Scene view, with additional information available in the Inspector.

If the parent of a Rect Transform is also a Rect Transform, you can anchor the child Rect Transform to it in various ways. Either attach the child to the sides or middle of the parent, or stretch it to the dimensions of the parent. See more examples in the documentation.

Here are some basic layout elements (each one is indicated by a number in the image above):

  1. Predefined UI GameObjects include text labels, buttons, sliders, toggles, drop-down lists, text fields, scroll bars, panels, and scroll views. Elements can consist of several child objects, each with a descriptive name and Image component attached to modify their appearance.
  2. Rect Transform gizmos can help you align elements, fix UI elements to reference points in the Canvas (Anchor Presets), or stretch and scale them.
  3. The predefined elements automatically include their required components. For instance, a Button adds an Image component, a child GameObject with the text label, and a Button component.
  4. Unity Events are included with interactive components to trigger a function. The triggered action can be a method from a script, Transform, or GameObject. For example, a Button includes an OnClick Event and a Slider includes an OnValueChanged Event, both of which can execute logic and create user interaction.
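As a minimal sketch of point 4 above, the following hypothetical script subscribes a method to a Button’s OnClick event from code (the class and field names are illustrative; the Button reference is assigned in the Inspector):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical example: trigger a scripted method from a Button's OnClick event.
public class StartMenu : MonoBehaviour
{
    [SerializeField] Button startButton; // assigned in the Inspector

    void OnEnable()
    {
        startButton.onClick.AddListener(OnStartClicked);
    }

    void OnDisable()
    {
        // Unsubscribe to avoid dangling listeners when the menu is disabled.
        startButton.onClick.RemoveListener(OnStartClicked);
    }

    void OnStartClicked()
    {
        Debug.Log("Start button pressed");
    }
}
```

The same callback could instead be wired visually in the Inspector via the OnClick Event list, with no code changes to the Button itself.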

Designing with Unity UI: Prefabs

When you create a new UI GameObject in the Scene view, it only belongs to that scene. You can duplicate the object, but making changes later is a manual process for every instance. Repeating elements when making UIs can therefore be quite inefficient. Instead, use Unity’s Prefab system. It equips you to modify many duplicate instances at once.

Unity’s Prefab system enables you to create, configure, and store a GameObject – with all of its components, properties, and child GameObjects – as a reusable Asset. The Prefab Asset acts as a template from which you can create new prefab instances. These assets can be shared between scenes or other projects without having to be reconfigured.

To work with them, drag a GameObject from the Hierarchy into the Project view. When prompted, select Original Prefab to create a new prefab, or Prefab Variant if you only want to override the values of an existing prefab. Its icon will change to a blue box in the Hierarchy view.
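Once a UI element is a prefab, spawning copies at runtime is a one-liner. This hypothetical sketch assumes an item prefab and a content root assigned in the Inspector; parenting the instance under the UI hierarchy keeps its Rect Transform layout intact:

```csharp
using UnityEngine;

// Hypothetical example: spawn instances of a UI prefab at runtime.
public class InventoryList : MonoBehaviour
{
    [SerializeField] GameObject itemPrefab;  // the prefab asset
    [SerializeField] Transform contentRoot;  // a child of the Canvas

    public void AddItem()
    {
        // The second argument parents the new instance under the UI hierarchy.
        Instantiate(itemPrefab, contentRoot);
    }
}
```

Editing the prefab asset later updates every instance, which is exactly the inefficiency the Prefab system removes.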

Designing with UI Toolkit: Intro to flexbox layouts

UI Toolkit positions visual elements based on Yoga, an open-source layout engine that implements a subset of the CSS Flexbox standard. If you’re unfamiliar with Yoga and Flexbox, make sure to check the e-book for a more detailed intro.

When defining child visual elements, the UI Builder offers two position options:

  • Relative positioning: This is the default setting for new visual elements. Child elements follow the Flexbox rules of the parent container. For example, if the parent element’s Direction is set to Row, child visual elements arrange themselves from left to right.
  • Absolute positioning: Here, the position of the visual element anchors to the parent container, similar to how Unity UI works with Canvases. Rules like Margins or Maximum Size still apply, but it does not follow the Flexbox rules from the parent. The element overlays on top of the parent container.
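The two positioning modes above can be sketched in USS. This is a hypothetical fragment (the class names are invented for illustration): the parent lays out its children with Flexbox rules, while one child opts out via absolute positioning and anchors to the parent instead:

```css
/* Hypothetical USS sketch: relative vs. absolute positioning. */
.toolbar {
    flex-direction: row;   /* children arrange themselves left to right */
}

.badge {
    position: absolute;    /* anchors to the parent, ignores the flex flow */
    top: 4px;
    right: 4px;            /* overlays the top-right corner of the parent */
}
```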

Remember that visual elements are simply containers. By default, they don’t take up any space unless they are filled with other child elements that already have a specific size, or you set them to a particular Width and Height. The Width and Height fields define the size of the element. The Max Width and Max Height limit how much it can expand. Likewise, the Min Width and Min Height limit how much it can contract. These impact how the Flex settings can resize the elements based on available space.

The Flex settings can affect your element’s size when using Relative positioning. It’s recommended that you experiment with elements to understand their behavior firsthand.

Basis refers to the default Width and Height of the item before any Grow or Shrink ratio operation occurs:

  • If Grow is set to 1, this element will take all the available vertical or horizontal space in the parent element.
  • If Grow is set to 0, the element does not grow beyond its current Basis (or size).
  • If Shrink is set to 1, the element will shrink as much as required to fit in the parent element’s available space.
  • If Shrink is set to 0, the element will not shrink and will overflow if necessary.
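The Grow and Shrink behavior above maps directly onto USS properties. In this hypothetical sketch (class names are invented), the sidebar keeps its basis while the content area absorbs all remaining space:

```css
/* Hypothetical USS sketch of Basis, Grow, and Shrink. */
.sidebar {
    flex-basis: 200px;   /* default size before grow/shrink is applied */
    flex-shrink: 0;      /* never shrinks below its basis; overflows if needed */
}

.content {
    flex-grow: 1;        /* takes all the available space in the parent */
}
```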

It’s recommended that you practice creating layouts and see how different screen sizes in the Simulator view display the UI. Once you understand the basics of these layouts, you can explore the other options for organizing elements inside a Visual Element, such as Direction, Wrap, Justification, Alignment, Margin, Border, Padding, and so on.

Designing with UI Toolkit: Styling with UI Builder

Styling visual elements is preferably done via USS files (Assets > Create > UI Toolkit > Style Sheet). They are the Unity equivalent of web CSS files and use the same rule-based format, which adds flexibility to the design process.

USS files can define the size, color, font, line spacing, borders, and location of elements. Proper styling can render crisp, clean visual elements, reducing the need for so many textures.

The UI Builder interface allows artists and designers to visualize the UI as it’s being built.

In the top-left Style Sheets pane, add a Style Sheet to the current UI Document (UXML) with the “+” drop-down menu to the left of the Add new selector… field. Modify the appearance of the visual elements by adding one or more USS files.

Style Sheets can share and apply styles across many elements and UI Documents (UXML). They do this via USS Selectors. You can add a new Selector in the Add new selector… field at the top of the StyleSheets pane.

Selectors query the visual tree for any elements that match a given search criteria. UI Toolkit then applies style changes to all matching elements.

USS Selectors can match elements by:

  • C# class name: These Selectors work by Type (Button, Label, Scroller, etc.). The Selector matches the available Type names in the Library pane, written without any special characters. Type Selectors appear in white.
  • Assigned name property: These Selectors can apply styling to all the elements of the same name. Name Selectors have a preceding hash “#” symbol and appear in blue.
  • USS Style Classes: You can apply a Style Class Selector arbitrarily to any visual element. Style Class Selectors have a preceding dot “.” character and appear in yellow.

Selectors also support pseudo-classes, which target elements in a specific state. Pseudo-classes are denoted by a colon “:” and modify existing Selectors, for example targeting the :hover or :focus state depending on pointer events.

If one element has several matching Selectors, the Selector with the highest specificity takes precedence. Name Selectors are more specific than Class Selectors, and Class Selectors are more specific than C# Type Selectors.
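The selector kinds above can be sketched in a single hypothetical USS file (the names and property values are invented for illustration):

```css
/* Hypothetical USS sketch of the three selector kinds plus a pseudo-class. */
Button {                  /* C# type selector: styles every Button */
    font-size: 14px;
}

#settings-button {        /* name selector: matches name="settings-button" */
    width: 120px;
}

.green-button {           /* style class selector: applied explicitly */
    background-color: green;
}

.green-button:hover {     /* pseudo-class: only while the pointer hovers */
    background-color: darkgreen;
}
```

If a Button were named settings-button and also carried the green-button class, all four rules would match it, with the name selector winning any conflicts per the specificity order described above.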

Overriding styles

Whenever you define a style class for UI elements, there will always be exceptions.

For example, if you have a group of Button elements, you don’t need to create a new Selector for each one. This would defeat the purpose (convenience) of making styles reusable.

Instead, you’d apply the same style to all of the buttons and then override the specific parts of each one that are unique (e.g., each Button element could override Background > Image to use its own icon). These Overrides are called UXML inline style properties. A white line next to the property in UI Builder represents an Override.
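In the UXML file itself, an inline Override shows up as a style attribute on the element. This hypothetical fragment sketches two buttons sharing one style class while each overrides its background image inline (the class name and asset paths are invented):

```xml
<!-- Hypothetical UXML sketch: shared class, per-element inline overrides. -->
<ui:UXML xmlns:ui="UnityEngine.UIElements">
    <ui:Button class="icon-button"
               style="background-image: url('/Assets/UI/play.png');" />
    <ui:Button class="icon-button"
               style="background-image: url('/Assets/UI/pause.png');" />
</ui:UXML>
```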

In the image below, we can see the selectors used by a Visual Element in the Inspector area of UI Builder and the StyleSheets section from where we can select and edit said Selector.

Designing with UI Toolkit: USS animations

As of Unity 2021 LTS, UI Toolkit includes a feature called USS transitions. These transitions allow you to create animations when changing styles.

Note: USS transitions can only be used with USS Selectors in a Style Sheet (not with inline styles).

Transition Animations

A Transition Animation needs at least two styles, so that they can represent the before and after states.

Think of the transition between pseudo-classes of a Button – the :hover pseudo-class over the .green-button Class Selector. Each style has its own size and color.

To define a transition in the mouse hover state, select the .green-button:hover Selector, then set the Transition Animations, located at the bottom of the Inspector. The result is a Button that animates with your pointer movements.

The Transition Animation interpolates between styles with the following options:

  • Property: This determines what to interpolate. The default setting is all, but you can select a specific property in the drop-down list. In the above example, :hover state has a Property of Color and Transform.
    As the mouse pointer hovers over the Button, the Button grows larger and changes to blue. See this complete list of properties.
  • Duration: This is the length of the transition, expressed in either seconds or milliseconds. For it to be visible, Duration must be set higher than 0.
  • Easing Function: Select an Easing Function that can approximate natural motion (acceleration, deceleration, elasticity, etc.). This kind of function makes the animation appear more organic than a simple linear interpolation.
  • Delay: Defined in seconds or milliseconds, this specifies how long to wait before starting the transition.
  • Add Transition: Each property of the new state can be animated individually, with different durations, delays, and easing effects.
    Click the Add Transition button to chain another Transition Animation. This makes it possible to trigger several overlapping transitions at once, making them more natural and less mechanical.
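In USS text form, the hover transition described above might look like the following hypothetical fragment (class name and values invented; the base state is green, and hovering scales the button up and turns it blue):

```css
/* Hypothetical USS sketch of a hover transition. */
.green-button {
    background-color: green;
    transition-property: background-color, scale;
    transition-duration: 0.2s;
    transition-timing-function: ease-out;
}

.green-button:hover {
    background-color: blue;
    scale: 1.1;   /* the button grows larger as the pointer hovers */
}
```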

UI and input controls

You can set up some basic interaction with the UI controls. Both UI systems in Unity handle input differently:

  • Unity UI’s input (UGUI) implementation works via the Unity Event System. For example, a Button has an OnClick callback event where you can invoke both built-in and scripted methods. Unity Events can trigger basic GameObject functionality (e.g., enabling or disabling GameObjects) or invoke your own functions. They are compatible with GameObject and MonoBehaviour methods, and can incorporate Visual Scripting nodes as well.

  • UI Toolkit provides a comprehensive Event System that connects user interactions to visual elements. A C# script can register visual elements to callbacks from available events. Essentially, UI Toolkit separates the functional implementation from the UI design. It all starts with getting a reference to the UXML visual tree, then querying it for the individual visual elements you need to access from code and storing those references in variables. The naming convention of visual elements is the common language between programmers and UI artists; names are how visual elements are found in code. Once a visual element is found, you can register it to callbacks, for example a button to a click event.
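The UI Toolkit flow above can be sketched in a few lines. This hypothetical script assumes it sits next to a UIDocument component whose UXML contains a Button named “start-button” (both names are illustrative):

```csharp
using UnityEngine;
using UnityEngine.UIElements;

// Hypothetical example: query the visual tree by name, then register a callback.
public class MainMenuController : MonoBehaviour
{
    void OnEnable()
    {
        var root = GetComponent<UIDocument>().rootVisualElement;

        // The element's name is the shared contract with the UI artist.
        var startButton = root.Q<Button>("start-button");

        startButton.RegisterCallback<ClickEvent>(evt =>
        {
            Debug.Log("Start button clicked");
        });
    }
}
```

Note that nothing here references the visual design: the UXML and USS can change freely as long as the element names stay the same.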

In the image below you can see how functionality can be implemented in Unity UI (left) and in UI Toolkit (right).

More resources

In our 130+ page e-book, User interface design and implementation in Unity, designed for artists, designers, and developers, you’ll find tips and best practices for building sophisticated interfaces with Unity’s two UI systems, Unity UI and UI Toolkit. Plus, get a walkthrough of the companion demo UI Toolkit sample - Dragon Crashers.

If you want to learn more about the principles of crafting immersive experiences, make sure to first read the blog post How to immerse your players through effective UI and game design.


Thanks for reading! We hope you found this article useful. Let us know if you have any feedback about the sample project or the article series.
