[Questions/Thoughts] Unity 4.6, the new UI system

NOTE: If this is available to read about somewhere and I missed it, please post links! :slight_smile:

To keep it simple, I’ll refer to the new system as uGUI2.

After watching the overview of the new UI system I (like most of you) got the feeling that things are finally getting organised and we are getting closer to having a true, built-in solution. However, there are a few important things that were not explained in detail (since this was only an “overview”), and some left out (?). I’ll go over them one by one, some of them reflecting on differences from NGUI (since the author of NGUI was a “helping hand” in the development of uGUI2).
Mainly, there are some major flaws in NGUI that I hope have not infected uGUI2. (Note: I’m not trying to be a douche; NGUI is an okay product for a third-party plugin, but compared to, say, the iOS/Android way of doing things it has some major design and feature flaws, which is what I hope won’t make it into uGUI2.)

The arguments below apply IF things are the way I hope they’re not. So if I’m wrong, I’d be more than glad to know! :slight_smile:

The “anchor system”:
Now, just the name “anchor” gives me shivers down my neck, since this sounds and looks a lot like the anchor system implemented in NGUI. That is, for me, horrible news. Not sure how to explain this in a few lines, but I’d really like a more thorough explanation (like, where are all the input values? If I want a rect to be half the width of another rect, where do I input “0.5”? And if I want it to be 50% of the width + 10px, where do I input the +10 constant?). There is a mystical anchors-array visible in the video, however it is never folded out… What possible values can be altered manually? Seriously, dragging around anchors is a very bad practice for anything more than “close enough”, temporary placement.

Most importantly about this anchor system: how does it compare to a REAL constraint system? A constraint system is so basic at its core, and gives so much flexibility (and is very logical to think about). After using NGUI’s anchor system for a day it just felt extremely hacked in, with loads of special cases (just looking at all the if-statements and every-frame updates made my heart hurt). Basically, constraints work by taking one property value from rectA (say left), running it through a linear equation, and outputting the new value to some property on rectB (say right). Now, with a very basic, logical and readable equation, rectB is placed with its right edge at rectA’s left edge, with easy-to-manipulate values. The equation is just outputValue = inputValue * multiplier + constant (never seen that before!). Read up on how Apple does it to get a better understanding. This is what I created myself for NGUI, and the speed at which you can get content in and have it automatically & smoothly handle different resolutions is… a lifesaver.
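
To make the idea concrete, here is a minimal sketch of such a constraint as a Unity component. This is not the author’s actual system; the component name and fields are made up for illustration, and it polls every frame for brevity (an event-based version would react to a rect-change callback instead):

```csharp
using UnityEngine;

// Hypothetical one-way width constraint: reads the source rect's width,
// runs it through outputValue = inputValue * multiplier + constant,
// and writes the result to this object's RectTransform.
[ExecuteInEditMode]
public class WidthConstraint : MonoBehaviour
{
    public RectTransform source;    // the rect whose width we read
    public float multiplier = 0.5f; // e.g. half the source width...
    public float constant = 10f;    // ...plus 10 pixels

    void Update() // polled per frame for brevity only
    {
        if (source == null) return;
        var target = (RectTransform)transform; // any uGUI object has one
        float width = source.rect.width * multiplier + constant;
        target.SetSizeWithCurrentAnchors(RectTransform.Axis.Horizontal, width);
    }
}
```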

Talking about Rect Transform, what events can be expected? Not talking about the hardcoded messages (OnColliderTriggerWhatever) but actual callbacks that can be registered to in order to be notified of changes in size etc. And a follow-up question that I was actually driving at: is the anchor system event-based or updated every frame?

Answer: “It’s event-based.”

Edit.
In the video I saw no way of determining which object to anchor to (parent was default). Is it possible to set this manually? (Expecting the answer to be yes of course, just checking!)

Property Binding:
Property binding is one of the most essential components of a flexible and easy-to-use UI framework, but nothing was unveiled in the video about this… Is this implemented in some form, or has it been overlooked? I wrote a very handy binding system for iOS once. It allowed you, with just a small line of code, to hook up one property of some sourceObject to a property on another targetObject (with the ability to choose direction). No polling, no expensive checking, just event-based binding. Think MVVM.
Ideally it should be possible to do this nicely in the inspector (just expose some property through code, making it the “source”, and then hook it up to different “targets”/widgets which will update automatically).

  • Value Converters:
    This is part of the property binding and MVVM.
    Value converters are a very clean and handy way to “puzzle together” non-matching values (e.g. a boolean converted into some usable int value). It’s extremely flexible, and conforms to the open/closed principle. You avoid the need to add custom stuff to other people’s code that will be replaced in a future update (a position you never want to get into). A minimal sketch of both binding and a converter follows below.
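
Here is a minimal sketch of the binding-plus-converter idea from the list above, assuming a hypothetical hand-rolled API (not from any shipped framework). The source raises an event when a value changes and the binding pushes the converted value to the target, so nothing is polled:

```csharp
using System;

// Hypothetical converter interface: adapts a source value to the type
// the target expects (e.g. bool -> int), per the open/closed principle.
public interface IValueConverter<TIn, TOut>
{
    TOut Convert(TIn value);
}

public class BoolToIntConverter : IValueConverter<bool, int>
{
    public int Convert(bool value) { return value ? 1 : 0; }
}

// One-directional, event-based binding: subscribe OnSourceChanged to the
// source's "changed" event and the target updates automatically.
public class Binding<TIn, TOut>
{
    readonly Action<TOut> setTarget;
    readonly IValueConverter<TIn, TOut> converter;

    public Binding(Action<TOut> setTarget, IValueConverter<TIn, TOut> converter)
    {
        this.setTarget = setTarget;
        this.converter = converter;
    }

    public void OnSourceChanged(TIn newValue)
    {
        setTarget(converter.Convert(newValue));
    }
}
```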

Empty view/widget:
Looking over the list of possible UI components, I didn’t see an empty, “invisible” view. Setting up UI is not always about sprites and labels; a lot of the setup consists of positioning, grouping and encapsulating objects to set up a logical hierarchy. Of course there is some core base component representing a single view. Is this “Rect Transform”? Anyway, it should definitely be available in the UI menu.

Clipping subviews:
IF Unity was smart enough to make a core component representing a single view (no graphics, just the logic behind a frame etc.), there should really be the possibility to choose “ClipSubviews = true/false”, meaning that if a child view (say an image) is moved outside the bounds of the parent view, it will be clipped/culled. Now, I saw the masking ability, but again, this is a core feature and should really be an option on all UI components (deriving from the core “view” component).
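
For reference, clipping can at least be set up from code with the shipped components; a minimal sketch (assuming the 4.6 Mask component, which needs a Graphic such as an Image on the same object):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Turns this object into a clipping container: child graphics outside
// its rect are not rendered.
public class ClippingPanel : MonoBehaviour
{
    void Start()
    {
        if (GetComponent<Image>() == null)
            gameObject.AddComponent<Image>(); // Mask needs a Graphic to define its shape
        var mask = gameObject.AddComponent<Mask>();
        mask.showMaskGraphic = false; // clip only; don't render the mask image itself
    }
}
```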

Aspect/Content Mode:
This really just means the possibility to determine how the content of a view is laid out; think about an image for the sake of argument. The ImageView is the UI component dragged around, and it determines the bounds, position etc. The “Image” is the rendered content inside the ImageView. The most common modes are Fill, Aspect Fill & Aspect Fit (a sketch of the math follows the list below):

  • Fill: Stretch the image.
  • Aspect Fill: Stretch the image until the bounds of the view are fully covered, still maintaining the aspect ratio of the raw image. This is where “ClipSubviews” comes in.
  • Aspect Fit: Stretch the image until the first edges touch, making the image fit into the bounds while still maintaining ratio. If the ratio differs from the bounds, there will be an empty border (I think you get the point).
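
The math behind the three modes is simple; here is a sketch as plain helper functions (not tied to any specific UI component):

```csharp
using UnityEngine;

public static class ContentMode
{
    // Fill: stretch to the container, ignoring the content's aspect ratio.
    public static Vector2 Fill(Vector2 content, Vector2 container)
    {
        return container;
    }

    // Aspect Fit: scale until the first edge touches; may leave empty borders.
    public static Vector2 AspectFit(Vector2 content, Vector2 container)
    {
        float scale = Mathf.Min(container.x / content.x, container.y / content.y);
        return content * scale;
    }

    // Aspect Fill: scale until the container is fully covered;
    // the overflow is what "ClipSubviews" would cut away.
    public static Vector2 AspectFill(Vector2 content, Vector2 container)
    {
        float scale = Mathf.Max(container.x / content.x, container.y / content.y);
        return content * scale;
    }
}
```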

Those are some of the things I had on my mind! Feel free to comment, but add the correct headline from the corresponding part so it’s easy to follow up!

Cheers!

This one is easy

You create an empty GameObject with a regular transform as your ‘group parent’, and it behaves as expected - full 3D rot/pos/scale transforms behave just fine as part of the hierarchy.

Or you can add a Rect Transform as a component and it replaces the standard transform. A Rect Transform simply appears to be a constrained Transform with some extra parameters.

My foundation toolkit supports data binding using the MVVM / MVC pattern. It uses a variation of INotifyPropertyChanged (I included the changed value in the event in order to skip a reflection call), but other than that it’s pretty much identical to WPF-style development.
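
A minimal sketch of that variation, with hypothetical type names (not the toolkit’s actual code): the new value rides along in the event args so the listener doesn’t need a reflection call to read the property back:

```csharp
using System;

// Like System.ComponentModel.PropertyChangedEventArgs, but carrying the
// new value so listeners can skip the reflection lookup.
public class ValueChangedEventArgs : EventArgs
{
    public readonly string PropertyName;
    public readonly object NewValue;

    public ValueChangedEventArgs(string propertyName, object newValue)
    {
        PropertyName = propertyName;
        NewValue = newValue;
    }
}

public class ObservableModel
{
    public event EventHandler<ValueChangedEventArgs> PropertyChanged;

    int score;
    public int Score
    {
        get { return score; }
        set
        {
            if (score == value) return;
            score = value;
            if (PropertyChanged != null)
                PropertyChanged(this, new ValueChangedEventArgs("Score", value));
        }
    }
}
```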

I should have a beta using 4.6 up sometime next week.

http://Unity3dFoundation.com

The anchor system is also easy:

You can set the anchors in a range from min (lower left) to max (upper right); for a basic GUI you might stay within the range of 0 to 1.
You can go outside of the parent’s box if you set the anchors to higher or lower values.

The anchors actually represent a scaled rect relative to the parent’s size.
The pivot is the point of fixation (in local space) around which the child is transformed (rotation, scale, etc.).

So far all of this is done in a nice percentage-like factor system within the range of 0 to 1.

If your screen is 1090 pixels wide, an anchor of 0.5 will get you half that amount, just as it would on an 800-pixel screen.
Correcting for aspect ratio could be done with a simple aspect-ratio correction script that sets the scale of the main canvas rect.
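
For anyone who prefers code over dragging handles, the same values are scriptable; a minimal sketch using the standard RectTransform API:

```csharp
using UnityEngine;

// Stretches this element across the left half of its parent by setting
// anchors directly, instead of dragging them in the scene view.
public class AnchorExample : MonoBehaviour
{
    void Start()
    {
        var rt = (RectTransform)transform;
        rt.anchorMin = new Vector2(0f, 0f);   // lower-left corner of the parent
        rt.anchorMax = new Vector2(0.5f, 1f); // halfway across, full height
        rt.offsetMin = Vector2.zero;          // zero pixel offsets so the rect
        rt.offsetMax = Vector2.zero;          // hugs the anchors exactly
        rt.pivot = new Vector2(0.5f, 0.5f);   // rotate/scale around the center
    }
}
```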

The pos vector and the width & height are where the insanity creeps in for me, as the system suddenly changes into a pixel-based one, with little to no translation visible to us.
You should add a ‘Reference Resolution’ component to the root canvas as soon as you create it; this should help resize the GUI when using pixel-based offsets.
For me this is met with very mixed results. I already scripted a class to autoscale the ancient UnityGUI, and this ‘Reference Resolution’ should do the same, but for some reason it fails, even after toying with the width vs. height slider.
I have not yet gotten it to properly resize my GUI. Am I crazy for just wanting the root Canvas to fit the screen and then everything on it to actually remain on screen, without having to micromanage all the anchor settings?

The ‘Reference Resolution’ is not set to the screen size in the editor at this point, so anything you do or change is met with a few teething problems: disable and then re-enable the component to make it actually update when you set the resolution (I think they might have forgotten an OnValidate in the component), and to see the results, resize your Game window a bit to watch it stretch and shrink.
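
For what it’s worth, here is a sketch of configuring this from code, assuming the component that shipped as CanvasScaler in the final 4.6 release (the beta ‘Reference Resolution’ component discussed above appears to be its precursor):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Attach to the root Canvas: scales the whole UI relative to a design-time
// reference resolution instead of raw pixels.
public class SetupScaler : MonoBehaviour
{
    void Start()
    {
        var scaler = gameObject.AddComponent<CanvasScaler>();
        scaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
        scaler.referenceResolution = new Vector2(800, 600);
        scaler.screenMatchMode = CanvasScaler.ScreenMatchMode.MatchWidthOrHeight;
        scaler.matchWidthOrHeight = 0.5f; // the "width vs height" slider; 0 = width, 1 = height
    }
}
```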

Setting your anchors with the built-in Rect Transform presets makes a lot of sense, until you start to think about multiple resolutions and orientations; then it quickly becomes your downfall, as it does all kinds of silly things without explanation.

Try creating something like a skill tree, where anchoring to a specific screen corner makes no sense. I was hoping the scaling would have become a non-issue, but I hope they fix this with better explanations, source code, or a fixed Reference Resolution component.
I hope they release the source (or maybe it can already be found somewhere) for the new GUI components; they are unsealed, so that is already pretty great.

Thanks!

Alright, then I just hope Unity includes it as a UI component in the list (to acknowledge that they understand the importance). Again, this is a core component for building a solid UI (readable and logical).


Thanks for your answer, always nice to see valid third-party alternatives! But my question/concern is directed more towards Unity implementing this in the engine itself.

A UI system goes like this (most important first):

  • Rendering the UI
  • Positioning the UI
  • BINDING THE UI

So I’ll be deeply concerned if they forgot number 3! :wink:


Thanks for the lengthy and detailed answer! Sadly, this new “anchor system” seems to confirm my fear (multi-resolution issues etc.), the same issues I instantly saw with NGUI’s anchor system (not saying this is the same, but the name suggests they work with similar logic)… Guess I’ll have to check, but I will for sure be ready to port my constraint system the second I start seeing these issues. The beauty of a constraint system is that if set up correctly, it just works in all possible resolutions, doing exactly what you expect.


Any info on what events can be hooked into on the Rect Transform? If there is no “OnRectChanged” variant, I’ll be more than sad!

Please feel free to share such a system with Unity or the community :wink:

I’ll first check out the new anchor system before I say too much, but if I port my constraint system, I sure will! :slight_smile:

Stealing this thread a bit. :slight_smile:
In short, after reading through the docs and viewing a few videos, I am left with the impression that implementing a relatively complex UI that is heavily driven by code (and uses callbacks for many types of actions such as drag/drop etc.) seems more cumbersome than with, for example, NGUI?
I had hoped the various events would have been more easily exposed and that setting up elements through code would have been more streamlined, but perhaps I am getting the wrong impression?

I would really appreciate if someone with NGUI and 4.6 experience would weigh in a bit. Many thanks in advance!

I just finished moving my databinding / MVVM library over to uGUI. I was able to move everything I used over easily. The only area of confusion is the “repeater controls” (Vertical Layout Group, etc.), which seem to misbehave from time to time without much explanation.

That said, I have not played with drag/drop. Maybe post some code?

Place an EventTrigger component on your UI object, select a trigger event from the dropdown, add the gameObject to the slot for the event, and from a dropdown select the component or object the event affects, or a function from a script; if there is a variable to pass, there is a slot for that too… No need to script the events if that ain’t yer thang, as this component handles all that with a nice GUI.

Use an EventTrigger component. In OnPointerDown, set a boolean such as startDragging = true in a function StartDragging(), and in OnPointerUp set startDragging = false in a function StopDragging(). In the Update loop, if startDragging is true, call an UpdateDragging() function that has your code in it. Just drag and drop your scripts into the slots, choose your functions from the dropdown menu and supply any variables you may need to pass to your function.
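
A minimal sketch of that pattern, using the function names from the description above (the positioning logic is illustrative and assumes a Screen Space - Overlay canvas, where UI positions map directly to screen pixels):

```csharp
using UnityEngine;

public class Draggable : MonoBehaviour
{
    bool startDragging;

    // Hook these two up to an EventTrigger's PointerDown / PointerUp entries.
    public void StartDragging() { startDragging = true; }
    public void StopDragging()  { startDragging = false; }

    void Update()
    {
        if (startDragging)
            UpdateDragging();
    }

    void UpdateDragging()
    {
        // Follow the pointer while dragging is active.
        transform.position = Input.mousePosition;
    }
}
```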

Trying to reply to a few things here that were not already covered in previous replies.

It’s event-based.

No. Similar to how Transform positions are always relative to the parent, the same is the case for RectTransforms.

For custom control of the layout of UI elements, you can write scripts that partially or fully control the positions and/or size of a RectTransform based on whatever logic you want. The built-in auto-layout components do this as well; for example the HorizontalLayoutGroup, VerticalLayoutGroup and GridLayoutGroup.
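
A minimal sketch of such a script (the half-width rule is made up for illustration; the callback is the standard one on UIBehaviour, which also answers the earlier “OnRectChanged” question):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Custom layout logic: whenever this rect's dimensions change, resize
// a child to half our width. Event-driven, not polled every frame.
public class HalfWidthChild : UIBehaviour
{
    public RectTransform child;

    protected override void OnRectTransformDimensionsChange()
    {
        if (child == null) return;
        var rt = (RectTransform)transform;
        child.SetSizeWithCurrentAnchors(
            RectTransform.Axis.Horizontal, rt.rect.width * 0.5f);
    }
}
```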

We don’t have any property binding in the new UI system.

What you’re describing is achieved just with a GameObject with a RectTransform and nothing else.

We have an entry called Panel, which usability testing showed was the term many people expected to find. It does include an Image, but that can easily be removed if you don’t want it.

Look for the Mask component.

What I meant was that we do use code for a lot of the UI, as it is complex and driven by external factors. We want to do it that way, and the inspector workflow (while nice) will not work for us. We need efficient script access.
NGUI, while sometimes convoluted, is easy to tap into via scripts. From what I have seen this far, the new uGUI seems to be the opposite: simple to set up via the inspector, but quickly getting cumbersome when you try to control / set it up via code. But that is just going by a brief read-through of the docs and reading threads here. I may be wrong, and I hope I am. Comments? :slight_smile:

It’s designed to be fully usable by code. If you’re having trouble or find something to be cumbersome, we’ll need more specifics to be able to provide advice or consider revisions.

Thanks for the reply!
We have not jumped in yet and please do not interpret my questions as any form of criticism - I am merely seeking feedback from those that have jumped.
I do so since threads like this one make me a bit cautious, along with the perceived need to hook up the event system, which is something that is fairly transparent and easy to set up in NGUI:
http://forum.unity3d.com/threads/creating-a-gui-from-code.263563/#post-1743543

I played about with it a bit yesterday and as I posted in this thread, http://forum.unity3d.com/threads/creating-a-gui-from-code.263563, creating UI elements from code without using prefabs seems rather cumbersome. I’m wondering if there aren’t any functions to just create a button in one go, like when you create one through the Editor GUI.

If there aren’t any yet, that’d be a great addition imo.

What is the reason you don’t want to use prefabs?

There are a million different ways to create a button with different look and feel. It’s unclear to me why we should have code to create a very specific type of button, when you can just create a prefab that has the look and feel you want and then instantiate that from code.

The only reason we don’t also just use prefabs for the menu items in the GameObject > Create UI menu is due to some technical issues with how built-in assets in the Editor work, which don’t apply to user-created prefabs.
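
To illustrate the prefab route (field names are illustrative):

```csharp
using UnityEngine;

// Instantiates a button prefab from code and parents it under the canvas.
public class SpawnButton : MonoBehaviour
{
    public GameObject buttonPrefab; // a prefab with the look and feel you want
    public Transform canvasRoot;    // the Canvas (or any UI parent)

    void Start()
    {
        var go = (GameObject)Instantiate(buttonPrefab);
        // 'false' keeps the prefab's local layout rather than preserving
        // world position, which is what you want for UI elements.
        go.transform.SetParent(canvasRoot, false);
    }
}
```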

“We don’t have any property binding in the new UI system.”

Actually, the UI system shouldn’t have its own binding. Unity as a whole would really, really (really) benefit from a generic system with editor support. I teach hundreds of students, and it’d be a huge help to have something in the editor for using real C# events. Events are key to good structure, but Unity provides no help or guidance, especially for beginners.
Currently I use a modified (and bug-fixed) version of the free Asset Store component “ZZSignalSlot”. True two-way loose coupling with real C# events is the best. It’s great, but a bit cumbersome to have to add a whole component for each event listener. If components could show C# events just like they do public variables and allow drag-and-drop to other compatible functions elsewhere (e.g. start the drag, grey out or remove GameObjects and components with no compatible functions, drop, and show listeners’ subscriptions from their side), it’d be beautiful.
Please consider for Unity 5 as every part of the engine could use it.
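
For context, a minimal sketch of the kind of loose coupling being asked for, with plain C# events and hypothetical component names (not the actual ZZSignalSlot asset):

```csharp
using System;
using UnityEngine;

// The "signal" side: raises a plain C# event, knowing nothing about listeners.
public class HealthSignal : MonoBehaviour
{
    public event Action<int> HealthChanged;

    public void SetHealth(int value)
    {
        if (HealthChanged != null)
            HealthChanged(value);
    }
}

// The "slot" side: subscribes in OnEnable, unsubscribes in OnDisable.
public class HealthBarSlot : MonoBehaviour
{
    public HealthSignal source;

    void OnEnable()  { if (source != null) source.HealthChanged += OnHealthChanged; }
    void OnDisable() { if (source != null) source.HealthChanged -= OnHealthChanged; }

    void OnHealthChanged(int value)
    {
        Debug.Log("Health is now " + value); // update the UI here
    }
}
```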

Did you check the new Event System from Unity 4.6?

I was thinking of how to replace our current GUI framework and integrate the new uGUI into our workflow. Right now we’re very code-focused in this area, so I kind of got stuck thinking along those lines. Been mulling it over since then, and I think you’re right: prefabs would probably be the way to go. After all, something needs to replace the old GUISkin with all its styles.