So today we have started talking about the New UI System that will be coming in Unity 4.6. You can find the blog post here and the video here, which covers a walk-through of how the system works.
The New UI System is designed from the ground up to meet the following needs:
Fast execution
Low draw call count
Easy to use
Easy to extend
Flexible API
Runtime allocation free
Over the coming weeks we will be releasing more videos and information about the new UI system, revealing extra technical details and how it works internally.
Right now, though, let's talk about the walk-through video and the things you have seen there; more discussion about the other areas of the UI system will be coming soon. If you are a beta user, feel free to share your high-level impressions of the system and how it has been shaping up over the past few months (keep normal feedback on the beta list). We would love to see what you are working on!
Update:
Will and I did a video about the new world space canvas!
Looks really great.
How does GUI input work for mobile devices? Does “OnClick” also register finger touches?
Is it possible to add notifications to the button's OnClick list via script in order to fire custom OnClick events for specific logic?
We'll be talking about the EventSystem more in a few weeks, but we use a unified event architecture, so when you touch and release, the element will get a PointerEnter, PointerDown, PointerUp, PointerClick, and PointerExit.
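For anyone wondering what handling that flow looks like in code, here is a minimal sketch (assuming the event interfaces in UnityEngine.EventSystems; the class name is just a placeholder) that logs each pointer event on a UI element:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Attach to any UI element to observe the unified pointer event flow.
// A touch-and-release on the element fires these callbacks in order.
public class PointerLogger : MonoBehaviour,
    IPointerEnterHandler, IPointerDownHandler,
    IPointerUpHandler, IPointerClickHandler, IPointerExitHandler
{
    public void OnPointerEnter(PointerEventData e) { Debug.Log("PointerEnter"); }
    public void OnPointerDown(PointerEventData e)  { Debug.Log("PointerDown"); }
    public void OnPointerUp(PointerEventData e)    { Debug.Log("PointerUp"); }
    public void OnPointerClick(PointerEventData e) { Debug.Log("PointerClick"); }
    public void OnPointerExit(PointerEventData e)  { Debug.Log("PointerExit"); }
}
```

The same component works for mouse and touch, which is the point of the unified architecture.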
Yes. More on this a bit later as well… but UnityEvents can be hooked up via the UI or via script.
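As a sketch of the script route (assuming the Button component's onClick UnityEvent; the field and method names here are just placeholders), hooking up a handler from code looks like this:

```csharp
using UnityEngine;
using UnityEngine.UI;

public class ClickHookup : MonoBehaviour
{
    public Button myButton; // assign in the Inspector

    void Start()
    {
        // Same UnityEvent the Inspector's OnClick list uses,
        // wired up from code instead.
        myButton.onClick.AddListener(OnButtonClicked);
    }

    void OnButtonClicked()
    {
        Debug.Log("Button clicked");
    }
}
```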
Just watched some parts of the video. While I haven't actually been creating or working on UI animation yet, these new elements seem to offer an awesome way of creating and modifying things.
Just thanks for your hard work in coming up with this system. Supersupersupersweet. This is really awesome!
Emphasizing this since positive comments usually tend to get buried in forum culture.
We tried both methods; in the end we felt that hierarchy ordering was easier to understand and works really well for a UI. It's one of those things you don't think will feel right, but then you try it and it feels really nice. That being said, after you use it you may not feel the same way I do about it.
As a Unity noob who's in the middle of doing GUI-related stuff for my first project, I'm both excited and scared at the same time. Excited, because these new features look great, almost Flash-esque in the way you can animate interactivity for things like buttons. Using world lights and materials opens up a whole new avenue too.
But on the other hand, I'm going to have to redo a big chunk of my game to use this system. From the looks of it, it will be worth it, and I'll have a better-looking and better-working GUI for putting in some effort.
At least it shouldn’t be hard to slap a one function script on it to manage that logic.
It is how Photoshop works, so I can't see too many people complaining that they don't get it, although from a scripting perspective having to manage order is going to suck. I can already hear all the complaining about nested prefabs coming.
All event-based systems have something polling somewhere, but yes - you can set up a button to call a particular function on one of your objects, and then sit back and wait for it to happen.
It shares similar design ideas with NGUI, but it has moved a long way from that. It will have its own ways of doing things.
It always uses the transform sort order. If you enable alphanumeric sorting you are just changing how the hierarchy window displays things, not the underlying order.
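A small sketch of what that means from script (using the standard Transform sibling calls): later siblings draw on top, so reordering a transform under its parent changes the draw order:

```csharp
using UnityEngine;

public class BringToFront : MonoBehaviour
{
    // Later siblings render on top, so moving this element to the
    // end of its parent's child list draws it above the others.
    public void Raise()
    {
        transform.SetAsLastSibling();
    }
}
```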
What about creating UI elements at runtime? I guess that will break the "runtime allocation free" part, but is it possible to do? If so, how would we deal with the sorting order?
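For what it's worth, a minimal sketch of how runtime creation typically looks under the sorting rules described above (the prefab field and sibling index are placeholders; Instantiate does allocate, so pooling elements up front is the usual way to stay allocation-free during gameplay):

```csharp
using UnityEngine;

public class RuntimeUISpawner : MonoBehaviour
{
    public GameObject buttonPrefab; // hypothetical prefab with a UI element on it
    public Transform canvasRoot;    // the Canvas (or any UI parent) to spawn under

    void Start()
    {
        GameObject go = (GameObject)Instantiate(buttonPrefab);

        // false keeps the prefab's local layout values instead of
        // preserving its world position.
        go.transform.SetParent(canvasRoot, false);

        // Draw order follows hierarchy order, so pick a sibling index
        // to control where the new element sorts.
        go.transform.SetSiblingIndex(0); // render behind its siblings
    }
}
```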