I’m creating a flight simulator and want to develop the multifunction displays (MFDs) found in cockpits. They’re screens with buttons around the side.
Depending on the mode, submode, and sub-submode, they should display different graphics (scripted text and sprite elements). The menu options are controlled by the buttons and sit in 3D space, so I don't think the UI system is appropriate(?)
My question is: what is the best approach to script the behaviour of these parent/child menus? I considered putting sprites/text on a world-space canvas, using enums for the modes, and modelling the behaviour with a bunch of if/else statements. Scripted sprite prefabs would be instantiated/destroyed depending on the mode.
That approach seems overly clunky for complicated nested menus and doesn't expand easily. Any advice? I vaguely thought of some kind of XML format but wouldn't know where to begin. A rough sketch of what I was considering is below.
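For reference, this is roughly the enum-plus-if/else structure I had in mind (mode names and prefab fields are just placeholders, not from a real project):

```csharp
using UnityEngine;

// Rough sketch of the enum/if-else approach I was considering.
// Mode names and prefab fields are placeholders.
public enum MfdMode { Nav, Radar, Stores, Fuel }

public class MfdController : MonoBehaviour
{
    public MfdMode mode = MfdMode.Nav;
    public GameObject navPagePrefab;
    public GameObject radarPagePrefab;

    GameObject currentPage;

    public void SetMode(MfdMode newMode)
    {
        if (currentPage != null) Destroy(currentPage);
        mode = newMode;

        // This is the part that gets unwieldy as submodes multiply.
        if (mode == MfdMode.Nav)
            currentPage = Instantiate(navPagePrefab, transform);
        else if (mode == MfdMode.Radar)
            currentPage = Instantiate(radarPagePrefab, transform);
        // ... one branch per mode/submode ...
    }
}
```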
Each display can be a quad showing a render texture fed by its own dedicated camera. Doing this means you're not limited in the type of elements you can render, and you'll need something like this anyway if you want to display the aircraft's or a missile's camera feed on one of the displays.
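As a minimal sketch (field names and texture size are just examples), wiring a dedicated camera into a render texture and putting it on the quad's material can be as simple as:

```csharp
using UnityEngine;

// Minimal sketch: render a dedicated camera into a texture and show it on the MFD quad.
public class MfdScreen : MonoBehaviour
{
    public Camera displayCamera;   // the camera dedicated to this MFD page
    public Renderer screenQuad;    // the quad acting as the MFD screen

    void Start()
    {
        var rt = new RenderTexture(512, 512, 16);
        displayCamera.targetTexture = rt;
        screenQuad.material.mainTexture = rt;
    }
}
```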
Sure, I can use a render texture for a TPOD mode, but I want to be able to enter lots of different modes/options, just like a real MFD. See the linked image for an example: different buttons enter different modes, and the text next to each button changes depending on the child menu options. How best to model the different modes and their corresponding behaviours? Am I stuck with long if/else blocks?
If you don't want the if/else blocks, you can use arrays of delegates, or simply put a script on each button and, when you switch modes, remove those scripts and replace them with the new mode's behaviour scripts. Although admittedly that second option will take a lot of scripts!
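A rough sketch of the delegate-array idea (the mode count, button count, and wiring here are made up for illustration):

```csharp
using System;
using UnityEngine;

// Sketch of the array-of-delegates idea: each mode supplies one action per soft key.
public class MfdSoftKeys : MonoBehaviour
{
    const int ButtonCount = 5;

    Action[][] modeActions;   // [mode][button] -> what that button does in that mode
    int currentMode;

    void Awake()
    {
        modeActions = new Action[][]
        {
            new Action[ButtonCount],   // mode 0
            new Action[ButtonCount],   // mode 1
        };

        // Example wiring: button 0 toggles between the two modes.
        modeActions[0][0] = () => currentMode = 1;
        modeActions[1][0] = () => currentMode = 0;
    }

    // Called by whatever detects the physical button press.
    public void OnButtonPressed(int buttonIndex)
    {
        modeActions[currentMode][buttonIndex]?.Invoke();
    }
}
```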
MFDs are basically a tabbed display: some buttons map to the tabs themselves, bringing them up when pressed, and other buttons are “soft mapped” to each individual tab.
You could just make a regular old tabbed display (see any tutorial for how), giving every tab its own full set of buttons, laid out identically from tab to tab.
OR, if you want to do the extra work in code, have each button simply emit an opaque identifier, such as a string, and have various controllers listen for those user intents and trigger the changes in the UI.
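The emitting side can be as small as this (a sketch; the class and event names are just examples):

```csharp
using System;
using UnityEngine;

// Sketch of the "button emits an opaque string" idea.
// The button knows nothing about modes; it just announces its identifier.
public static class UserIntent
{
    public static event Action<string> OnIntent;
    public static void Raise(string id) => OnIntent?.Invoke(id);
}

public class MfdButtonStub : MonoBehaviour
{
    public string buttonId = "BUTTON_TOP_1";

    // Hook this up to whatever detects the press (collider click, UI event, etc.).
    public void Press() => UserIntent.Raise(buttonId);
}
```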
Personally I freakin’ HATE dragging and dropping and making a giant brittle prefab-laden hierarchy for these things, even though it is “The Unity Way™.” Doing so results in something that’s impossible to understand even the very next day, and when it misbehaves you will find yourself squinting and clicking through opaque hierarchies of overlapping UI items, NONE of which you will be able to select from the scene, since that functionality has been broken in the UI scene window for several years now.
I prefer to author the entire display with tiny stubs for data output and user intent input, all connected to a super-generic string-driven system that knows nothing about the function of the display and just provides “fabric” to transfer data in and out of the UI.
By using clear unique strings for each item (such as BUTTON_TOP_1, BUTTON_RIGHT_1, etc.), debugging and hooking up is done directly in code with a bunch of const string s_Button1 = "BUTTON_TOP_1"; constants, and when something misbehaves you can trivially print the string to a log, or set a breakpoint and see what’s going on.
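The consuming side is then just a controller listening for those strings. A sketch, building on the UserIntent stub above (constant names and the per-case behaviour are placeholders):

```csharp
using UnityEngine;

// Sketch of the consuming side: a controller that reacts to the string intents.
public class MfdPageController : MonoBehaviour
{
    const string s_ButtonTop1 = "BUTTON_TOP_1";
    const string s_ButtonRight1 = "BUTTON_RIGHT_1";

    void OnEnable()  { UserIntent.OnIntent += HandleIntent; }
    void OnDisable() { UserIntent.OnIntent -= HandleIntent; }

    void HandleIntent(string id)
    {
        Debug.Log("User intent: " + id);   // trivial to log or breakpoint here

        switch (id)
        {
            case s_ButtonTop1:   /* bring up the page this key selects */ break;
            case s_ButtonRight1: /* update whatever this soft key controls */ break;
        }
    }
}
```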
This is how my Datasacks package works. Here’s a block diagram:
If you are using any version of Unity later than Unity 2021, it may be necessary to add this line to your Packages/manifest.json, or add it via the Package Mangler:
Regarding the second question: you could have an array with one element per MFD mode. Each element would be a structure containing all the MFD elements for that mode. Changing modes would simply change the array index.
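A minimal sketch of that idea (the struct fields are just examples of what a mode might hold):

```csharp
using UnityEngine;

// Sketch of the per-mode array idea: one entry per MFD mode,
// each holding everything that mode needs on screen.
[System.Serializable]
public struct MfdModeData
{
    public string modeName;
    public GameObject pageRoot;      // parent object holding this mode's sprites/text
    public string[] softKeyLabels;   // text shown next to each bezel button
}

public class MfdModeSwitcher : MonoBehaviour
{
    public MfdModeData[] modes;
    int currentIndex;

    public void SetMode(int index)
    {
        modes[currentIndex].pageRoot.SetActive(false);
        currentIndex = index;
        modes[currentIndex].pageRoot.SetActive(true);
        // ...refresh soft key labels from modes[currentIndex].softKeyLabels...
    }
}
```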