I am making a vehicular game (a PC game, not an “app” :P) played fully from inside a cockpit. I’m thinking of doing the HUD as neon 3D wire-frame boxes, actually placed floating in front of (possibly as a child of) the camera. This thing would basically be an armature rig with no animations that would be manipulated in C#.
Throttle, current speed: that part would be downright easy. Rectangles slide based on variables in the player motor and input classes. Another would coincide with an ammo pool.
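For what it’s worth, a sliding bar like that could be as simple as scaling a thin box under the camera each frame. A minimal sketch, assuming a hypothetical `PlayerMotor` that exposes `currentSpeed` / `maxSpeed` (names are placeholders, not anything from your project):

```csharp
using UnityEngine;

// Hypothetical sketch: stretches a "bar" box with normalized speed.
public class SpeedBar : MonoBehaviour
{
    public PlayerMotor motor;  // assumed to expose currentSpeed and maxSpeed
    public Transform bar;      // thin wire-frame box parented under the camera

    void Update()
    {
        // 0..1 fraction of top speed
        float t = Mathf.Clamp01(motor.currentSpeed / motor.maxSpeed);

        // Slide/stretch the bar along one axis
        Vector3 s = bar.localScale;
        s.x = t;
        bar.localScale = s;
    }
}
```

Scaling from one end (by offsetting the mesh pivot) makes it read like a filling gauge rather than a box growing from its center.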
Sure. Neon is not always a popular choice for UI, but 3D UI is not uncommon. There is (or was) an asset in the Asset Store that pretty much does what you describe (though I don’t think neon was a default option).
I was looking for personal opinion / feedback. Also, you didn’t even read the OP; and you linked a google search for “Neon UI.” I didn’t ask you to do any work for me, and you’re being a colossal baby for reacting to this thread in that way. This forum is labeled “gossip” not “thesis paper research”; if you don’t like a thread you can get out and not shitpost. I didn’t ask “how make MMO.”
While I have extensive knowledge of games themselves, I can’t think of any examples which use this method to render an entire HUD specifically; and I don’t know about every game that exists. Relevant search strings for this topic return results I am not interested in, due to most pages being about the games themselves and not how their individual elements are implemented.
Think about the hands/gun model that appears on your screen… while not technically UI, it sort of is: it’s a 3D model that appears only on your screen, showing interaction that you are controlling.
That’s how NGUI et al. and presumably the new Unity GUI do it, as well - the GUI bits are geometry.
Instead of parenting it to the main camera I’d make a dedicated camera just for that, though. Might give you more control, might save a bunch of transform work, unlikely to hurt anything.
Right, this makes sense. I hadn’t thought about it that way, but usually there is a separate first-person viewmodel; exactly what I was thinking about doing, really.
For anyone interested, I ended up deciding on a combination of two methods. I am going to use the GL class to directly draw a few primitives.
Since I’m not that skilled at math, I’ll also use a small model placed in the user’s viewport: a small, spinning 3D representation of the player (which will serve as the health read-out), displayed and rotated as if it were a third-person copy of the player.
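The GL part I have in mind would be something along these lines; this is just a rough sketch, and the material/shader choice is an assumption (GL drawing needs a material set before issuing vertices):

```csharp
using UnityEngine;

// Rough sketch of drawing HUD lines with Unity's low-level GL class.
// OnPostRender only fires on a component attached to a Camera, and the
// lines need a material (e.g. a simple unlit shader) assigned first.
public class GLHud : MonoBehaviour
{
    public Material lineMaterial;

    void OnPostRender()
    {
        lineMaterial.SetPass(0);
        GL.PushMatrix();
        GL.LoadOrtho();                  // draw in normalized screen space
        GL.Begin(GL.LINES);
        GL.Color(new Color(0f, 1f, 1f)); // neon-ish cyan
        GL.Vertex3(0.1f, 0.1f, 0f);      // one edge of a wire-frame box
        GL.Vertex3(0.3f, 0.1f, 0f);
        GL.End();
        GL.PopMatrix();
    }
}
```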
Could you (or anyone) elaborate on this technique? I’m not afraid of programming, but I am just starting to warm up to Unity’s API and methods. I assume you are referring to placing a camera at an arbitrary point outside the game space, placing the element in front of it, and then projecting it into a window, possibly through GUI.Window?
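If it helps the discussion: one common way to do that projection is a RenderTexture rather than GUI.Window. A second camera far from the play area renders the spinning model into a texture, which the HUD then draws. A minimal sketch, with all field names being placeholders:

```csharp
using UnityEngine;

// Sketch: an off-scene camera renders the player copy into a
// RenderTexture, which is drawn as part of the HUD.
public class HealthModelView : MonoBehaviour
{
    public Camera modelCamera;   // aimed at the off-scene player copy
    public RenderTexture target; // e.g. 256x256, created in the editor

    void Start()
    {
        modelCamera.targetTexture = target;
    }

    void OnGUI()
    {
        // Show the camera's output in a corner of the screen
        GUI.DrawTexture(new Rect(10, 10, 128, 128), target);
    }
}
```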
Well, I grew up reading, so it’s not difficult for me to express myself in words. When I started using the internet there were no images; in school we navigated the file-tree directly, without a GUI, on the regular, and the same goes for work.
The thing is, I didn’t go straight to greyboxing. I spent weeks learning Blender, modeling a main “character” (if you can call it that) and animating it. If it were greyboxed I would share, but it wouldn’t help because the topic is hypothetical. As in, there wouldn’t be anything to see yet because I haven’t implemented it, and I was curious how others would implement it. As it stands, there isn’t really enough meat on my game’s bones. Since I do intend to use those assets, though, I shouldn’t show them yet. They don’t even have materials / textures / bump maps / normal maps yet, since I chose to leave that for another day.
I have a new-found appreciation for organic 3D forms, that’s for sure. Fortunately my concept needs none.
I learned that there are several ways I could do it, and I will be using a combination of different classes to achieve my end result.
1.) Drawing lines / meshes using the Graphics class in the form of thin polygons.
There’s the Gizmos class, too, but it falls outside of acceptable use because it’s intended for debugging purposes (and is very limited anyway).
There are many ways to draw primitive elements for use in a HUD without even touching the GUI library, which I assume most people use for this. So it’s more of a discussion of possible implementations than a specific question.
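For option 1, a “line” can just be a thin quad built in code and submitted every frame with Graphics.DrawMesh, so no extra GameObjects are needed. A hedged sketch (material and dimensions are arbitrary):

```csharp
using UnityEngine;

// Sketch: a "line" as a thin quad mesh, drawn each frame with
// Graphics.DrawMesh instead of living as a scene object.
public class ThinQuadLine : MonoBehaviour
{
    public Material neonMaterial;
    Mesh quad;

    void Start()
    {
        quad = new Mesh();
        quad.vertices = new[] {
            new Vector3(0f, 0f,    0f), new Vector3(1f, 0f,    0f),
            new Vector3(1f, 0.02f, 0f), new Vector3(0f, 0.02f, 0f)
        };
        quad.triangles = new[] { 0, 1, 2, 0, 2, 3 };
        quad.RecalculateNormals();
    }

    void Update()
    {
        // Re-submit every frame; position/rotation come from this transform
        Graphics.DrawMesh(quad, transform.localToWorldMatrix, neonMaterial, 0);
    }
}
```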
I get the feeling you guys get lots of “how I pick up gun” threads, huh.
I grew up reading and writing but that’s not the point. We’re not illiterate: we asked for a mockup because your writing is unclear and doesn’t explain much, so we don’t really know how to discuss implementations. Either you’re doing something pretty simple (but it doesn’t appear so, since you asked “anybody seen anything like this yet?”, which implies you believe it’s pretty original) or you didn’t explain it well. I can assure you we get a lot more than “how do I pick up gun” threads here (though ok, there are those too).
I would make a skinned mesh model in Blender or Maya for the entire cockpit and instantiate it as a GameObject in front of the camera.
The wheel would be attached to a bone, each lever would be attached to a separate bone, etc. Then all you need to do is animate the bones from C# to move the different parts.
Why this, you may ask? Well, if it is just one skinned mesh and you move the individual bones, you can draw your entire UI with a single draw call, yet move each part independently. This gives excellent performance.
Also, by making the model in Maya or Blender, you can make it look exactly how you want in a professional visual environment, and it lets you make changes very quickly. You can even outsource this work if you know a 3D designer.
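Driving a bone from C# is just moving its Transform; Unity exposes the rig’s bones as ordinary child transforms of the skinned mesh. A sketch for one part (bone name, axis, and input mapping are all assumptions about the rig):

```csharp
using UnityEngine;

// Sketch: rotate the steering-wheel bone of the skinned cockpit mesh
// directly from player input. No animation clips involved.
public class CockpitRig : MonoBehaviour
{
    public Transform wheelBone;     // a bone inside the skinned mesh rig
    public float maxWheelAngle = 90f;

    void Update()
    {
        float steer = Input.GetAxis("Horizontal"); // -1..1
        wheelBone.localRotation =
            Quaternion.AngleAxis(steer * maxWheelAngle, Vector3.forward);
    }
}
```

Since the skinning happens after these transforms update, every part moves independently while the whole cockpit still renders as one mesh.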
This seems perfectly feasible and is actually a great approach. Your main goal performance-wise should be to reduce draw calls (not polygons), and this accomplishes the entire thing in one draw call. Heck, this would even be faster than uGUI, since there is no need to build the GUI mesh at runtime; it comes ready-made from the imported model.
Unity still has to do the skinning, but since it is a rigid, robotic kind of mesh, he only needs one bone weight per vertex, which is very cheap (cheaper than your typical character with two or three bone weights per vertex). Skinning is also much cheaper than rebuilding the mesh whenever something moves, which is what NGUI does (and probably uGUI too), or than paying for separate draw calls. Skinning can even be hardware accelerated, depending on platform and video card.