I have posted a detailed design concept in the above thread regarding a better approach to modular rigging workflows. This design allows us to take our Modular Rigs and apply Procedural Animation to them (in-engine) using constraints and “module” components, much the way one might combine and overlay multiple filters in Photoshop to achieve a specific image result. In our case, we would achieve a specific kind of rig.
Please see my posts in that thread.
Right now, working with Modular Rigs (mainly creating and applying constraints en masse) is quite painful for those of us who do not have special tools and/or are not mathematically inclined.
Modular Rigging is too granular at the moment, meaning it lacks very important cross-over functionality that can (and should) be applicable to any group(s) of bones – i.e. “Bone Modules”, each containing a group of bones (such as an arm or a hand) that can be displayed however the user likes, and that can contain an Animation Curve defining Global and Local Constraints applied all the way down the bone chains of each “Module”.
This “masking system” might be somewhat akin to the Humanoid diagram: the head is a “module” since it has eyes/neck bones; the body is a “module” since it has sockets and bones for the limbs; and the hands/feet are each separate modules, since each has fingers/thumbs/toes appended to it and a wrist/ankle socket that lets it plug back into the body “module”. The hands could be mittens, bug-hands, ninja-turtle three-fingered hands, or whatever – and this would get mirrored over to the other side (based on bone-naming conventions). The user would click in the Scene view to add bones to the mask manually, or bone names would match against the various hand/foot/body “modules” and be automatically slotted in based on their naming conventions (to speed up rigging).
Procedural animation is just at our fingertips:
Constraints, if possible to apply as a ripple all the way down a bone chain (with an Animation Curve to define the weight of their effects on each bone in the chain), could allow easy, non-scripted ways to make a rig behave as we want it to. For example, with a spring-dampen Constraint down an arm chain or a finger chain, an “ease-out” Animation Curve would decrease the “spring” factor on the digits farther from the socket origin of the hand’s Bone Module. Applying a curve to the “dampen” factor is possible too, and would look neat as an “Ease In” Animation Curve. Think of the visual possibilities!!
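To make this concrete, here is a minimal sketch of what a curve-weighted chain constraint could look like as a plain MonoBehaviour. Everything here – the SpringChain name, the fields, the damped-spring math – is my own assumption of how such a thing might work, not an existing Animation Rigging constraint:

```csharp
using UnityEngine;

// Hypothetical sketch only: distribute a "spring" factor down a bone chain
// using an AnimationCurve, so distal bones react differently than proximal ones.
public class SpringChain : MonoBehaviour
{
    public Transform[] chain;                // root-to-tip bones of one "module"
    public AnimationCurve springWeight =     // ease-out: less "spring" toward the tip
        AnimationCurve.EaseInOut(0f, 1f, 1f, 0.1f);
    public float stiffness = 60f;
    public float damping = 8f;

    Vector3[] velocity;
    Vector3[] restLocalPos;

    void Start()
    {
        velocity = new Vector3[chain.Length];
        restLocalPos = new Vector3[chain.Length];
        for (int i = 0; i < chain.Length; i++)
            restLocalPos[i] = chain[i].localPosition;
    }

    void LateUpdate()
    {
        for (int i = 0; i < chain.Length; i++)
        {
            // 0 at the chain root, 1 at the tip - sampled through the curve
            float t = chain.Length > 1 ? i / (float)(chain.Length - 1) : 0f;
            float w = springWeight.Evaluate(t);

            // simple damped spring pulling each bone back toward its rest pose
            Vector3 offset = chain[i].localPosition - restLocalPos[i];
            velocity[i] += (-stiffness * offset - damping * velocity[i]) * Time.deltaTime;
            chain[i].localPosition += w * velocity[i] * Time.deltaTime;
        }
    }
}
```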
Virtual bones would allow us to build “sensors” attached to “modules” that would then enable us to script the sensors for a given set of bones (“Bone Modules”, which also contain sensor indexes – i.e. sensor 1, sensor 2, etc.). For example, while in a “walking” state, “sensor 1” checks for a ceiling and “sensor 2” checks to see where the next foot can be placed (or, if the foot would have to go through a wall, it sends a signal back up to the calling code to tell the player to go to a “stopping” state). Programming sensors as if they were throwaway variables is a great way to program animation without complex state machines.
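As a rough sketch of the kind of “sensor” I mean (the SensorProbe component and the state calls in the usage comments are hypothetical – underneath, it’s just a raycast hung off a virtual bone):

```csharp
using UnityEngine;

// Hypothetical "sensor" hung off a virtual bone: a throwaway raycast probe
// that state code can query without a full state machine.
public class SensorProbe : MonoBehaviour
{
    public float range = 1.5f;
    public LayerMask mask = ~0;

    public bool Blocked(out RaycastHit hit) =>
        Physics.Raycast(transform.position, transform.forward, out hit, range, mask);
}

// Usage while in a "walking" state (controller/state names are made up):
//   if (ceilingSensor.Blocked(out _))    { /* duck or stop */ }
//   if (footSensor.Blocked(out var hit)) nextFootPosition = hit.point;
//   else                                 controller.RequestState("stopping");
```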
These are “killer-app” features for procedural animation. I hate to think that a keyframing approach is being done on top of a crusty old “frame-by-frame” system that requires artists to do the transform and blending maths manually, without the proper hierarchy of easy-to-use tools to enable the fundamentals of procedural animation.
Is something like this being considered? – Or do we have to program it ourselves?
@davehunt_unity – Also, don’t forget about the “Center of Mass” and the “Spring + Dampen” constraints!
These constraints are special and would have to be applied to a chain of bones (and/or modules) all at once (again, using an Animation Curve to distribute the weight down the chain of a particular module / group of bones), helping characters remain upright (or maintain a sense of weight) and stay “jiggly” (or not) where it counts most!
Also, a “Pose-Blend” constraint would be nice too. This would allow one to blend a group of bones (a Bone Module) to a particular pose keyframe from any Animation Clip – separate Bone Modules could grab separate poses from separate Animation Clips. Legs could run while torsos aim and arms and hands shoot. And if a module is missing a bone (or has an extra bone not found in the Animation Clip), those bones are simply ignored while the next constraint freely processes whichever of these leftover bones it wants. Constraints can be weighted, of course, so pose-blending could be controlled with a sliding value, letting other constraints applied to the module have a greater effect on the final bone position if desired.
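Here is a minimal sketch of how a “Pose-Blend” could be faked today: sample one frame of a clip onto a hidden duplicate of the rig, then blend matching bones by name into the live module. The PoseBlend component and its fields are my invention; AnimationClip.SampleAnimation is a real (if legacy-flavored) API, and it may not behave identically on humanoid rigs:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical "Pose-Blend" sketch: grab a single pose from any clip and
// blend it (by weight) into the bones of one "Bone Module".
public class PoseBlend : MonoBehaviour
{
    public AnimationClip clip;
    public float time;                 // which keyframe/pose to grab
    [Range(0f, 1f)] public float weight = 1f;
    public GameObject poseSource;      // hidden duplicate the clip is sampled onto
    public Transform[] module;         // bones of this "Bone Module"

    Dictionary<string, Transform> sourceBones = new Dictionary<string, Transform>();

    void Start()
    {
        foreach (var t in poseSource.GetComponentsInChildren<Transform>())
            sourceBones[t.name] = t;
    }

    void LateUpdate()
    {
        clip.SampleAnimation(poseSource, time);   // one-frame sample of the pose
        foreach (var bone in module)
        {
            // bones missing from the clip's hierarchy are simply ignored
            if (!sourceBones.TryGetValue(bone.name, out var src)) continue;
            bone.localRotation = Quaternion.Slerp(bone.localRotation, src.localRotation, weight);
        }
    }
}
```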
I do – In terms of how I use it, I’m mainly just making my own tools for UI/UX as I need them so that I can manipulate the animation much more easily. I don’t aim to spend a lot of time on these tools though, since DOTS Animation is supposed to be in the pipeline at some point.
That being said – I’m not sure how much longer we’re looking at on that front, so I am not entirely sure this is going to be possible. I would like to start moving to a DOTS-only approach, but I’m pretty much using Animation Rigging for nearly everything animation-related right now.
Yeah, as a beginner I got caught up in all the DOTS stuff last year, but now it is on the back burner, so I just decided to use the current tools for now.
Should I use 10 TwistChain Constraints for the fingers?
It depends on your project and what you’re trying to do.
In VR, for example, this could be fine (though the second option might still be a bit better?) – For a typical character, though, you’d have three things:
an animation rig for the hands,
an animation rig for the rest of the character’s body,
‘animation’ clips with the hands in different poses (played only on the hand rig, not on the body rig, which doesn’t include the hands).
The idea is that, if you want a peace sign and have 3 animation clips – open hand, fist, peace sign – you set the peace-sign animation’s weight (on the hand rig – not on the body/character rig) to 1 and all others to 0.
This gives you the fingers posed correctly. You can “blend” the peace-sign animation with the open hand to “lift” the thumb, ring, and pinky/small fingers all at once (into an “open hand” pose – without playing the “open hand” animation directly).
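In code, the layered-pose idea might look something like this. The “Hands” layer (masked to the hand bones via an AvatarMask) and the state names are assumptions about how such a controller would be authored; the Animator calls themselves are standard:

```csharp
using UnityEngine;

// Minimal sketch of the layered hand-pose setup described above.
// Assumes an Animator controller with a "Hands" layer masked to the hand
// bones, containing states like "PeaceSign", "Fist", and "OpenHand".
public class HandPoses : MonoBehaviour
{
    Animator animator;
    int handLayer;

    void Start()
    {
        animator = GetComponent<Animator>();
        handLayer = animator.GetLayerIndex("Hands");  // layer masked to hand bones
    }

    public void PeaceSign(float blend)
    {
        // weight 1 = full peace sign; lower values blend back toward
        // whatever the base/open-hand pose underneath provides
        animator.SetLayerWeight(handLayer, blend);
        animator.CrossFade("PeaceSign", 0.15f, handLayer);
    }
}
```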
About the Animator… is there a way to make it behave procedurally? I’m just getting into it. There’s that jarring thing when switching between animations. I need to look into what Blend Trees are.
I’ve been reading a lot of mixed things about the Animator. There are always complaints on the forum, and then there’s the Animancer website, where I just gave up following the reasons why Mecanim is bad… none of which I can attest to. I just adopted Bolt, and am trying to integrate animation that way, because typing C# is very difficult for me.
I liked what Wolfire Games presented, because then there is no jarring change between animations. But how does one achieve that in the Animator? And how does that affect Animation Rigging?
Also… is there a way to sidestep the Animator completely? With just Bolt? I mean… if we’re just dragging windows about, Bolt is very much doing the same things. Can I reference the animations that way?
First, I want to mention that Mecanim and the Animator are completely different systems.
So, technically, the Animator itself is really the underlying “transition” mechanism, which the Mecanim “State Machine” logic used to be built on top of. This has since been decoupled over the years, especially upon the release of Animation Rigging. Originally, it was impossible to “blend” more than 2 animations at a time. Then came Mecanim, which let users blend as many as they want (with BlendTrees) – but this comes at a huge CPU cost. The two approaches are not “compatible” out of the box either – at least with Unity’s approach to state machines. To make it “simple” for users, Unity thought it was a good idea to couple state and animation logic together in a weird and convoluted way that only seems straightforward on paper. This is likely what causes the “jarring” that people complain about: the “logic” must still rely on frames to complete, while blending is somewhat independent of the state logic (as far as I can tell) and must operate as fast as possible (and therefore be executed, and possibly distributed, across multiple CPU cycles to render an animation frame). This was necessary in order to decouple the two systems enough for something like Animation Rigging or Playables / Timeline to work.
To answer your question though: underneath the Mecanim (state-machine) layer is still the Animator (just decoupled from its Mecanim counterpart’s state-machine approach). However, nowadays it is based on the Playables API (the same thing that runs both Timeline and Animation Rigging), and as far as I can tell, Mecanim is hooked into this Playables API as well, in order to standardize functionality across the different tools. Because you can use Playables (aka the Timeline methodology) to control animations, they are now able (by default) to be controlled procedurally through the Playables API (just like what Unity does with Animation Rigging). So, theoretically, you could even remake Animation Rigging completely if you wish – as long as you use the Playables API as a base.
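To make that concrete, here is about the smallest Playables example there is – playing a single clip straight through a PlayableGraph, with no state machine involved. The API calls are standard; only the component itself is made up for illustration:

```csharp
using UnityEngine;
using UnityEngine.Animations;
using UnityEngine.Playables;

// Drive a clip procedurally through the Playables API, bypassing any
// Mecanim state machine entirely.
[RequireComponent(typeof(Animator))]
public class PlayClip : MonoBehaviour
{
    public AnimationClip clip;
    PlayableGraph graph;

    void OnEnable()
    {
        graph = PlayableGraph.Create("PlayClip");
        var output = AnimationPlayableOutput.Create(graph, "Anim", GetComponent<Animator>());
        var playable = AnimationClipPlayable.Create(graph, clip);
        output.SetSourcePlayable(playable);
        graph.Play();
    }

    void OnDisable() => graph.Destroy();
}
```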
Regarding the Wolfire Games approach – you simply cannot do this in the Animator itself. You need to go a bit lower-level. They directly change the actual frame-by-frame interpolation to “animate” pose-to-pose (based on a curve used as the interpolation), which is usually defined in the animation clip itself (through the .fbx file as it is imported into Unity). Interpolation is usually determined by Unity at a low level, but custom interpolation can still be written in Unity.
In Wolfire’s approach, before applying this interpolation, they also plug the physics simulation into their initial pose and target pose, and base the secondary “motion” (floppy ears, heavy arms) on top of their invisible collider’s movements for their character controller (to handle side-to-side movement of the ears when running and turning, for example). This allows some bones to have greater stiffness than others along different axis directions. Looking carefully at the motions, you will notice that when an arm moves to its target pose (say, in the walk/run), there is no sense of “floppiness” in the forward/backward direction in the hands or arms (because the arm bones are only “floppy”, or less stiff, in the vertical direction, and so far, while running, no vertical motion is being made). However, when the character jumps or LANDS after a jump, the arms will indeed appear slightly “softer” or “floppier” than during the run. This is because the “rolling” sphere they use to move (or “skate”) the character around ends up causing an upward/downward “force” on the arm bones of the (closest) target pose while evaluating the curve. The most important thing to note is that physics isn’t applied per-bone (although some bones react more heavily to physics in a particular direction – i.e. the ears react more “floppily” to forward/back movement than to up/down movement of the collider, whereas the arms react more “floppily” to up/down forces on the invisible collider as it reacts to the world).
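If it helps, here is a very rough sketch of that idea as I understand it: curve-driven interpolation between two authored poses, plus a per-axis reaction to the controller collider’s velocity. Every name and constant here is my own guess at the structure – it is emphatically not Wolfire’s actual code:

```csharp
using UnityEngine;

// Rough sketch of curve-driven pose-to-pose interpolation with per-axis
// "softness" reacting to the invisible controller collider's motion.
public class CurvePoseBone : MonoBehaviour
{
    public Quaternion poseA, poseB;            // source and target pose rotations
    public AnimationCurve interp = AnimationCurve.EaseInOut(0f, 0f, 1f, 1f);
    public Rigidbody controllerBody;           // the rolling-sphere character collider
    public Vector3 axisSoftness =              // e.g. arms: soft vertically, stiff otherwise
        new Vector3(0f, 0.5f, 0f);
    public float swayDegrees = 20f;

    Vector3 sway;                              // smoothed per-axis reaction

    public void Evaluate(float t)              // t = 0..1 progress between poses
    {
        // pose-to-pose interpolation shaped by the authored curve
        Quaternion target = Quaternion.Slerp(poseA, poseB, interp.Evaluate(t));

        // react only along the axes this bone is "floppy" in
        Vector3 push = Vector3.Scale(controllerBody.velocity, axisSoftness);
        sway = Vector3.Lerp(sway, push, 10f * Time.deltaTime);   // crude damping

        transform.localRotation = target * Quaternion.Euler(sway.y * swayDegrees, 0f, 0f);
    }
}
```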
Does this make sense?
To answer your question though – as far as I know, interpolation itself cannot be controlled through Animation Rigging, or even the Animator alone. However, there is a more general (underlying) API introduced just before Animation Rigging (which Animation Rigging and Playables both seem to use) that allows one to modify the actual interpolation mechanism – C# Animation Jobs is its name, I believe. I’ve never delved too deeply into this system (as Animation Rigging solves most of my problems), but I would still be interested if you were to build a version of this Wolfire Games system on top of the Animation Jobs / Playables API. Unlike a lot of people who have tried to understand and emulate the Wolfire Games procedural animation system before, I’m pretty positive I’ve actually cracked the “secret” formula. So I should be able to answer any questions you might have should you (or anyone else) try tackling a system like this. I would totally partner up with you.
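For anyone curious what that hook point looks like, here is a minimal Animation Jobs setup – a job that nudges one bone in the animation stream via an AnimationScriptPlayable. It only demonstrates where you would intervene in the pose each frame, not the Wolfire interpolation itself:

```csharp
using UnityEngine;
using UnityEngine.Animations;
using UnityEngine.Playables;

// A minimal animation job: offsets one bone in the stream each frame,
// after clips are evaluated but before the pose is rendered.
public struct NudgeJob : IAnimationJob
{
    public TransformStreamHandle bone;
    public Vector3 offset;

    public void ProcessRootMotion(AnimationStream stream) { }

    public void ProcessAnimation(AnimationStream stream)
    {
        bone.SetLocalPosition(stream, bone.GetLocalPosition(stream) + offset);
    }
}

[RequireComponent(typeof(Animator))]
public class NudgeSetup : MonoBehaviour
{
    public Transform target;   // the bone to nudge
    PlayableGraph graph;

    void OnEnable()
    {
        var animator = GetComponent<Animator>();
        graph = PlayableGraph.Create("Nudge");
        var job = new NudgeJob {
            bone = animator.BindStreamTransform(target),
            offset = new Vector3(0f, 0.1f, 0f)
        };
        var playable = AnimationScriptPlayable.Create(graph, job);
        AnimationPlayableOutput.Create(graph, "out", animator).SetSourcePlayable(playable);
        graph.Play();
    }

    void OnDisable() => graph.Destroy();
}
```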
Finally – The explanation above should answer your question here. Technically, the answer is “sort of, but not entirely”. Bolt doesn’t have the ability to handle Animation Jobs (as far as I’m aware – I may be wrong though), so the Wolfire Games approach wouldn’t be possible using Bolt. However, the Animator itself is just a final “layer” that pulls together all the various Playables into a single frame to be rendered, so bypassing the Animator doesn’t make a huge amount of sense in general. If you modify the Playables (which synthesize the procedural animation under the hood) by using the C# Animation Jobs approach to affect the interpolation, the Animator would be necessary to display that on-screen for now. Under the hood though, you finally have access to everything – assuming you know what level of access you will need – and assuming your “animation” is actually bone-based. Sadly, Bolt is extremely limited as far as what it can currently do with C# classes, etc.
Hey, thank you very much for the reply. It looks very detailed, and I will be going over it over the next couple of days so I can process it properly. You definitely live up to your name.
Wow. Thanks for the answer. I really appreciate it.
Maybe it is just a mental thing of me trying not to use the Animator when really I should learn it. I have been trying to go around this whole animation stack, which is probably not smart. However, I have been trying to learn the basics, and now I’m a bit OK with knowing that it might take a few more weeks until I know how to integrate Animation Rigging with the Animator. I really took the approach of “I like Animation Rigging, and Mecanim/Animator is supposed to be bad, so I will try to use the one without the other”, when in fact Animation Rigging is built with integration with the Animator in mind.
As to the Playables API, I was looking into that, but it seems not to have any kind of learning path, no tutorials. Maybe sometime in the future, when I know more of Unity and can read/write code more comfortably, I can try seeing if I can make sense of it.
It takes years to learn this stuff. I wish I knew all the things about how these systems integrate so I could start actually making a game, but I always seem to be learning foundational stuff.
Anyway, thanks for the guidance, oh wise one. I may pop up with more questions later, once I’m more familiar with the basics of the animation systems.
The reason I am so vocal around here is because Unity doesn’t seem to realize that most people, even those with plenty of knowledge, are always “supposed to be” making games – yet instead they’re always learning foundational stuff. BUT those who do manage to make a game in the process of endlessly digging into the technology they need simply perpetuate the myth that the tools to do everything they want already exist. How else would they have made a game?
However, like you pointed out about the Playables API – there are typically no “learning paths” (as you put it) to much of the important fundamental knowledge needed. Those who have attained that knowledge have either A) written the tools/papers themselves to demonstrate (empirically) that the knowledge is out there, or B) have some “inside” connection with those who have that knowledge already (but have not yet demonstrated it). For those who happen to know stuff about the Playables API without following these two paths, the only other option is that they have been involved in social circles where this knowledge is experimented on and then passed around freely. So that might be a good place to start, rather than doing it all on your own.
Like you said – it takes time. But you don’t have to do everything all by yourself. That’s what friends are for. And it takes time to make friends too…
This is why it sucks. So much of this knowledge is spread out, and knowing what it takes to put it all together into one cohesive beast is not for the faint of heart – and it’s not like anyone tells you either.
I personally have been making small prototypes since the late 90’s, but I have been interested in game technology since I first laid eyes on Super Mario Bros. I’ve always been thinking about it. Though, even I have yet to actually complete a full-fledged game. Not because I can’t – but because I don’t want to. Not just yet.
I came close once. However, small prototype after small prototype eventually made me realize (like you seem to have) that there was always some technological roadblock that prompted me to seriously study more and more of the fundamentals of the technology in order to realize my vision. Technology (and the tools to build it) simply weren’t there yet. Years later, my life has become less about making games and more about trying to find the right tools/mindset to help OTHER people make games. This teaching has become the source of my knowledge. And as much as it sucks for me to never have made a full game – I have made plenty of completely functional prototypes from just about every dimensional genre, and even one full-scale (networked) game that was just too early for its time – so I know how, and can truly explain in great (nuanced) detail, what is necessary to make a game of ANY scale or type, 2d or 3d, with any kind of team (or even completely alone).
I’ve known quite a few people over the years in the AAA space – and I don’t envy them. Only their technology.
And if you’re wondering why I’ve never completed making a game – The answer is, deep down, I never wanted “making games” to be a job – at least not without the proper (fun) tools to do it. You don’t typically get those as an indie developer. Had I been a programmer/artist for a AAA title, I might have had access to those tools. But those came just a little after my time.
Though, now that I have the skills, I kind of wouldn’t mind a job designing the tools for making games. I just haven’t stumbled upon that job yet.
If I had it to do over again, I wouldn’t change anything. Somebody has to push for better tools and workflows – so it might as well be me.
However, the way I look at it – I think it is too easy to be ambitious in game design. You are much better off using the few tools you have available and making them work as best you are able. And if you find yourself lacking in skill on some technical front – simply limit yourself willfully to something less technical, but five times as creative. This is what it takes to make a hit in game design – and this is something anybody can do.
Yeah. This foundational-stuff thing… it’s so true. The tool chain is so long that you can’t really specialise and stay in one area, and there are so many systems to learn. You run into roadblocks where you need to learn another part of Unity, and that takes months, in parallel with other things. At least it’s fun.
That depends on what it is you’re trying to do – and on what kind of person you are.
I tend to find it fun learning new systems – but that fun quickly fades when I realize how poorly designed some of them are (and how many tools/workarounds I will need to achieve my vision). At that point, the effort can often feel hard to justify.
This foundational stuff can definitely get in the way if you’re not careful – and Unity is designed in a way that promotes endless foundational learning, so you’ve gotta watch out.
It’s like rebuilding your car’s engine every time you want to drive a different speed to the grocery store. Sure, the parts are all there – but what is the point in knowing the makeup of the engine and how to tweak the combustion system when all you want to do is speed up (or slow down) a little along the way?
In Unity, this nuance is lost – You have a speed-up system, a slow-down system, and a combustion system that this all extends from – but each “system” is built in isolation. While this is fine when it has to do with one function and scope, a complex scope that may or may not have multiple parallel functional equivalents is another story (i.e. not just speed and braking on tires, but also on airplane wings and jet boosters too, all in the same vehicle). At this point, it is a great idea to have a single system to handle speed-up and another system for slow-down. There is a lot of complexity there, and the nuance achieved by placing it in separate systems makes sense. However, Unity does this with the smallest of features (for example, Scene Visibility, which should really be expanded into editor LOD/Streaming territory rather than visibility of gameobjects as the be-all-end-all of its functionality), and it is this precise tendency that holds its technology – and users – back.
It’s a tough cookie for me, speaking on how an engine should work. But I do know that when I decided to do this, I thought it would be so much simpler. Like… I thought they’d have a player controller “as-is” with all the functionality built in. And I thought the animation system would be human-friendly.
It’s all very logic-driven, which is great when you know the logic, but not so great when you have to learn it, and it seems so abstract.
But I’m gambling that if I just stick with it, things will start to feel more intuitive. I think Unity will strive for this upper layer where the minutiae are all dealt with, but that could be decades away – if that kind of higher-level game development environment is even viable. Such as Dreams on the PS4.
Man, do I feel you here! – This is what had me split between Unity and Unreal at the very beginning. Unreal had all the tools built in, with basic (albeit crappy) interfaces – but it was ultimately still a beast to run the games – while Unity was just overall more flexible and faster to implement things in, despite not having the tools (and I could easily run the games it made). So I opted to either make the tools I needed, or buy them if I didn’t want to (or know how to) make them. The interface and UX were 1000x more important to me than the functionality of the tool itself, and Unreal didn’t impress me on this front.
That being said – I think this is what drives me to push so hard at Unity to do better!
It has so much potential – yet there are so many flaws in its implementations!
This is understandable – I’m not sure of your level of knowledge of game engines, but it sounds like Unity is your first experience. My situation is different, however. I’ve worked with many engines – though I’ve only taken two seriously.
I never wanted to be “that guy” who bitches and complains about everything that is not up to his “standards”, and yet, at the same time, I realize that I have no right to complain after the fact, when Unity inevitably misses the mark without my input. So rather than simply keeping my mouth shut and passively letting Unity become a pile of shit that can’t be cleaned up, I’d rather state my thoughts/feelings – and hear (or not) the development team’s response, whether I like that response or not – so I can further address any misunderstandings on either side. They aren’t game developers, after all – there are plenty of nuances they don’t fully grasp. On the flipside, I AM a tool developer, and I know how tools – especially for artists/designers, and even games – should be made. So I feel obligated to speak up a little more than the average person due to my experience alone.
Some (light) backstory about my journey if you’re interested, lol:
Way before Unity, GameMaker caught my eye (way back in the 90’s) with its promise of “Easy Game Development!”. After studying games for YEARS on my own, trying to work out how they were made under the hood by using things like a Game Genie and Game Shark on various consoles, before I even knew it was possible to program a game outside of C++ or Assembly, I was already skeptical of the “easy” part that was promised. However, since it was primarily a 2d engine at the time, programmed in Delphi, I didn’t expect much – everything else that existed at the time was extremely limited or overly specific to a single game genre. The game I wanted to make, however, combined 2d fighting games with RPG and online multiplayer. After giving it a try and getting nearly 100% of the way through my project, the experience surprised me with how flexible and intuitive game development could actually be. The major reason I didn’t make it all the way through my project was that the technology was just too slow – and that the author sold his technology to some yoyos who thought the future of games was tile-based, bitmap-driven, 2d platformers. So yeah. After working with the program for years, developing many vertical-slice prototypes, I changed engines.
Unity’s base workflow (compared to Unreal and others) was the closest to GameMaker’s I could find in my years of searching (gameobjects were simply “objects”) – but it was lightning fast compared to what I was used to. However, the more I dug into Unity, the more I realized it had a steep learning curve if I wanted to achieve feature parity with what I had achieved in GameMaker. And in some cases, some of the things I wanted to do (that I had already achieved in GameMaker) simply weren’t even possible in Unity at the time (i.e. loading external resources at runtime, which was core to my gameplay). 2d was also in its infancy in Unity, so it was up to me to rewrite all of the systems GameMaker had handled for me – which was harder than it sounds, since Unity left a lot to be desired with 2d motion/collision. So while I waited for 2d to improve, I went on to learn other bits of the engine to play around with what I already knew about 3d (while also studying other game engines – 2d and 3d, independently developed and corporate products, etc.). In the process, I realized there weren’t many engines with a lot of promise – just Unity and Unreal – and Unreal was too heavy. Unity, on the other hand, was too unpredictable in its API (plus everything was hidden in a black box nobody could get to, so when physics was broken – it was really broken). The crazy thing is, Unreal had all the features I needed (except 2d) – but Unity had the better (more flexible) workflows and an easy way to develop your own tools. So in the end, with no other decent alternatives, I opted to stick with Unity and push it to be a better engine. This was the only way I could see (as a non-C++ master) getting a game engine with the best features of all engines – in one package. This was my dream – and still is.
PS:
To be clear – I am not saying GameMaker was a silver bullet. It still had plenty of issues (even UX issues) at its core.
Speed was the most crippling issue of all though.
This was partially because it was TOO flexible. That is, it made EVERYTHING 100% unique. It didn’t allow sweeping changes to code or behaviors unless everything was written as a script. Scripts were not instances, though – they were variables – with string execution. This was horrible for both memory and speed (not to mention code security). Almost everything was string-based. It was great for prototyping ideas with flexible code, and it had a simple-to-understand UX too – but GameMaker couldn’t make a full-featured game (much less a multi-platform game) “easily” to save its life. It was just too inefficient. Though, compared to C++ or Assembly, I guess one could argue otherwise. The core wasn’t designed for anything ambitious at all. To be fair, the author was a professor at a university who taught game design. He was just one person working on this in his spare time for his students, so you really couldn’t expect much. That said, you could load/unload resources later on (he was great about implementing user requests, and that did help), but this was entirely too much work for such a core feature as loading/unloading objects/resources. So there were flaws aplenty. However, for what it was – a tool for game prototyping – it was great and straightforward, and it is a prototype-authoring tool that I remember fondly.
This is something I’m hyper-focused on.
I can’t wait for decades to build a proper game development toolchain either.
However, Unity (with the help of some third-party tools) is actually really close to this already.
Visual Scripting is the last step of the journey for me as far as intuitive major tools that I’d prefer not to build alone. Once I have a solid option for coding in Unity that is flexible and scalable (and finally enjoyable to work with), I will have gotten to the point of needing nothing more from Unity that I can’t do in external packages (that translate right into the editor). Houdini and Blender have provided the level of tooling I need in terms of art and level design. Akeytsu (and Houdini) plus Animation Rigging (and eventually DOTS animation) would do it for me for animation. UI Builder has nearly fixed my issues with UI. Only visual codebase authoring is left, and I have provided detailed feedback to (and am in contact with) the team that develops this portion of Unity. While I’m not particularly pleased with the direction they’ve taken with VS so suddenly, I’ve made it known (very clearly) why that is the case. They have definitely pivoted to some extent from their original direction thanks to my feedback (and the help of others who feel the same way), so it looks like they’re going to deliver something decent. I’m still fighting right now to ensure they have a full grasp of what a system like this would look like in the end, but I think my work is nearly done on this front. Beyond this, you have your “foundational knowledge” to create any kind of game you want – and the tools to do that intuitively – all without Unity’s future involvement. Hobbyists might have to wait longer, but serious designers will soon have what they need.
Speaking of Dreams –
Dreams on the PS4 is something the Product Managers at Unity pointed out to me nearly 6 months ago as something they’d like to achieve, though perhaps with windows. They are definitely looking into this on some level. However, even some of these guys are frustrated that Unity is, for example, still looking at heightmap-based terrains for their “Environment System” instead of actual meshes, along with other 2003-era plans. The reason for such slow software development is a lot of internal hemming and hawing about software direction. Unity saves a lot of money on R&D by eyeballing the open-source community and technology whitepapers rather than having a director for their overall technology. Unreal Engine clearly has this director, but isn’t constantly rewriting its whole codebase to be more transparent and modular the way Unity is; with Unity, a lot of parts need to fit back together (with better design/performance), which clearly needs some thought and foresight.
That being said – Dreams is a higher-level development environment. However, it exploits VR and motion controls for “artist-friendly” development, and I doubt some of those controls are as intuitive as they seem to some artists. As a result, a development environment like this needs some serious consideration behind its vision. The biggest plus with Dreams is that they had a guy behind the system who is an artist himself. This guy had enough intuition to tell them to ditch the windows-based “dialogue-box” interfaces for context-sensitive actions (for example) – which is something very few tools actually do (except really REALLY good ones). But this is a simple UX trick, and nothing more. A tool like Unity needs hundreds (or thousands!) of these. Though Unity, with the right backend technology, could (very quickly) have an interface like this – if they choose to hire the right UX designer.
All in all – it’s not a gamble (anymore). Unity has become a powerhouse right under our noses. Just learn some basics across all elements of game technology (i.e. shaders, animation technology/principles, meshes/materials, some basic optimization, and how to read code) and third-party tools to help you with the content development. Once a proper Visual Scripting solution appears, the rest should fall into place – if you’ve got a clear idea of the design you’re after, of course. I am planning to make this process much easier for everyone, however.
This is the key – and is exactly why, to expand on my previous paragraph, I actually plan to create a start-to-finish “learning path” and “tool pipeline” setup to share the most intuitive tools, and the most optimal learning paths, with the designer community. At the moment, I am just waiting to see Unity’s new VS solution. If I can simply write a small plugin to improve the intuitiveness of writing code/tools for game design – i.e. Freeform Animation authoring – this will be the cherry on top of intuitive game design. So don’t worry too much. I plan to really help anyone who wants to learn the basics of game development technology quickly – showing them the most intuitive tools available at the time, right at their fingertips.
If I cannot accomplish my vision of a badass visual scripting tool within the next year and a half (with Unity’s VS tool – and I don’t mean Bolt 1), I will write my own visual scripting tool to supplement the intuitive scripting and animation processes Unity lacks. None of these tools will take long to build, as there are plenty of other tools to be used as my foundation, if necessary. In the meantime – keep learning – and don’t give up!
Unity is worthwhile to know – just stick to the parts you know for now, and know that the future is not far away.
Hope for an intuitive development process is right around the corner!
Talking about Unreal: I got a new 1TB SSD, so I finally have space for Unreal. So far, I’m kind of kicking myself that I didn’t do it 2 years ago. A big issue for me as a beginner was getting trees and foliage into the game. Unity was a real pain to do this in with the render pipelines. Even the split between HDRP and URP is just a complication that, so far, Unreal doesn’t have. It seems so easy to work with on the surface.
I thought I’d give Unreal 7 full days. I gave up and came back to Unity a day or 2 early, just to be more familiar and do actual stuff. But the trees I got in the recent bundle are not playing nice again with the render pipelines. It’s probably my fault – I’m using the latest beta. I don’t know. It seems Unreal is really good at just putting everything in front of you rather than sending you on another journey of discovery…
I’m kind of frustrated because I’ve learnt many parts of many systems in Unity, and I don’t really want to give that up, but I guess I’ll leave it to whichever can draw me in more. The prospect of making epic landscapes with no render-pipeline issues, less asset hunting, etc. is very appealing.
You said you use Houdini. I also downloaded that once but decided to focus on Unity instead. How does it fit into your workflow? I’m interested to try it, but I fear my focus is getting too spread out. As a solo dev, I think I should really get the whole tool chain sorted for a simple game. But just out of curiosity, I wonder how people use it, because it seems very powerful.
In defense of Unity though, now that I have booted into URP, it’s nice to have something run smoothly on a low-end machine. And the interface is very nice, orderly, and subdued. And obviously I’ve spent a lot of time with it and am familiar with how things operate – and sometimes don’t. And the 2D tools are great. SpriteShape, etc.
Bolt also… it’s just nice-looking, as are Shader Graph and VFX Graph. I think, visually, Unity is much more pleasing. I don’t know how much that matters to me, but I think it might.
So I hope, now that they have money from their IPO, they really match the content delivery of Unreal. Otherwise, I think they might lose many developers who don’t really want to spend money on assets up front.
I think perhaps the game has changed. Unreal is legitimately providing a library of world-building assets, which to us means we don’t have to budget for those things, don’t have to worry about compatibilities, etc. I don’t know what Unity is doing with their photogrammetry purchase, but they had better implement a workflow that matches Unreal’s for HDRP if they want to compete in that space.
As somebody who has used A LOT of game engines over my 20+ years making toolchains and pipelines to make all kinds of different game styles, I have found that most game engines (except Unity) focus on getting something simple in front of the user, and some basic (intuitive) tools to operate on that simple thing in a simple way.
However, the moment this fails is the moment you need to go deeper than the “simple” tools allow – and it is precisely THIS moment when “simple” is really tested in most engines.
Unreal requires you to get into Blueprints almost right off the bat. Like Houdini (which you mentioned), you have a hell of a lot of nodes – and you must already understand them (and how to use them efficiently) before you even begin. This is a LOT to ask of a new user, but those devoted enough can make some headway fairly quickly. The problem arises when these nodes require you to know more about game development in general than you’re actually ready for (the underlying structure of things like shaders, models, etc. creeps up on you later as an invisible knowledge requirement). Unity’s “workflow” pretty much requires you to face much of this stuff right off the bat (since it is necessary to make anything), while Unreal’s workflow lets you ease into it (since it pretty much handles shaders, rendering, etc. for you until you’re ready to take control of this stuff – but it tends to botch things on a performance level for everything except high-end games). While this handholding sounds great in theory, the very fact that it obfuscates these things from you at first keeps you from genuinely “getting stuff done”, since (whether in Unity or Unreal) you still need to understand what you’re actually doing under the hood (i.e. with shaders, models, memory, etc.). With this knowledge, jumping ship from Unity gives you a head start in Unreal (which seems a lot easier to understand thanks to Unity’s pain points), but as with anything “different” enough, new pain points tend to arise when you have to “unwrap” Unreal’s pretty package and start to scratch at what’s under the surface once you’re ready to get a bit serious.
All in all – “on the surface” is really what it’s all about with most game engines and beginners. The moment you have to let go of the hand that guides you, things get complicated – fast. You’re like a scared child in a forest of syntax and C++ becomes the wolf that is staring you down. Unity, in contrast, gets you into its (heavily unorganized and extremely complex) “guts” pretty much immediately, and with a little battle experience, something like C++ doesn’t seem like such a huge challenge – just a bit unnecessary, considering the tools you’ve already got to work with (like DOTS and ECS for performance and render pipelines made to specifically target either low or high-end hardware).
On its surface, Unity seems more complicated (its UX is definitely needlessly so sometimes), but it is clear about the problems of performance in that it leaves a lot up to you to handle your design – whether that’s based on performance or ease of use. The C++ world that Unreal is silently leading you toward, without ever telling you performance problems exist, is more troublesome to me in the long run than a terrible UX. If a false sense of comfort is what you want, Unreal is better – though even simple games run like crap on lower-tier hardware. Optimization in C++ is often necessary, as Blueprints tend to be useful only as prototype architecture. Unity at least doesn’t lie to you with the technology itself: it is only partially ready. Unreal, on the other hand, makes you believe it is the full package – until it isn’t. But by that time, you’ve already invested so much time/energy/money into getting things where you want them that you simply cannot go back. This is true in more ways than one – UE4 is a closed ecosystem, after all. At least with Unity you have an out: all your assets are yours. You can take them wherever you want – even to Unreal. While it is nice overall to have tons of photogrammetry worldbuilding assets at your disposal in UE4 for “free” – it does seem everything has a cost. In my experience, it is extremely important not to overlook that cost early on.
While this is a smart move in theory – in practice, I’ve found that working just one step above what you’re comfortable with in complexity is the only way to ensure your development methods will grow with you. A “simple” game tends to be the suggestion to those who want to go MMORPG (who clearly won’t last long in gamedev) – but a proper toolchain and pipeline may not even be necessary with some “simple” games, and might end up building false hope that things can be expanded upon later. In practice, this is often not the case. Generally, the most “simple” things get thrown out, not reused. So keep this in mind.
I’m not saying be ambitious – but I am saying do some “stretches” first before you do the exercise, and you’re less likely to pull a muscle later, as things are already more flexible before you start. This metaphor applies to your tool chain especially. Stretch it first – then use that “stretched” version in production for your “simple” game. You’ll find it serves you better in both limiting yourself, and fitting your exact needs – simultaneously.
This is a good question – Houdini is great for all kinds of things in a Unity workflow (especially geometry/texturing), but its new thing is going to be “animation” too – i.e. rigging and whatnot – which means it will fit in well with Freeform Procedural Animation in Unity. This is a new set of features I didn’t even know was coming, so I’m not certain of my eventual Unity setup just yet, but I’m positive this is just one more way Houdini can help my asset workflow.
To answer your question though – think of Houdini as another Unity with UX features that Unity itself tends to lack (i.e. object placement, scattering, booleans, hooks into sculpting apps like Blender or Zbrush, automatic texturing, LOD stuff, etc.) I use it and Blender for pretty much all modeling / texturing I need to do in the context of game design.
Without the right introduction, learning Houdini can be overwhelming. Most people who teach it come from VFX and film backgrounds. Few approach it from a gamedev point of view. Before you go down that path, I’ll get back with you on some training materials that might make that introduction a bit smoother / easier to grasp. Stay tuned.
This is absolutely true. I completely agree. I think, on some level, Unity already understands this – the “Snaps” series of assets Unity has released is supposed to build into that process. I think Unity will find out soon enough that you can’t beat “free”, however. Enough people are jumping ship right now that they may wake up to the reasoning behind it. Sadly, I’m not sure enough people at Unity really understand where the process is failing them. It isn’t photogrammetry, nor is it even Visual Scripting (as much as it would help them to innovate in this area). It is the fact that Unity itself is being run by “in the trenches” programmers and “college grad” designers who don’t have battle-hardened design-programming experience (i.e. Houdini). This “hard” design stuff is generally left to a marketing team equally ignorant of the practical, day-to-day development life of the average developer. This is the perfect storm of what we developers don’t actually need (or want) – including the AAA developers they cater to.
It honestly scares me that people are jumping ship to Unreal for art assets – because Unity is the better engine at the end of the day. However, if you can get a character into the game world and running around in less than 5 minutes from start to finish, that says a lot about an engine’s priorities as a whole. The problem is, how we all interpret those priorities tends to be a little harder to intuit when you don’t actually make games with your own engine. This is where Unity really falters compared to UE – and where people probably doubt Unity the most. Sadly, free assets wouldn’t save them – but it definitely couldn’t hurt to “repair” their image a little bit.
Well, I am fully back in Unity now. It was a nice little distraction, I got to see the other side, and I still have Unreal installed, but I think it’s time to refocus on Unity and gain some deeper understanding. Thanks for all the advice on that.
It’s actually very impressive how you can write all that out. Certainly when I’m focused on a topic I can write a lot, but you seem to be quite skilled at it. And it’s quality knowledge.
I tried to get into Houdini a couple of years ago, but it was kind of alien to me, not being versed in coding or anything. I think I might have a look at it again – actually, I think I might download it… later. New 3D software, in my limited experience, always sends me back to Unity, just from the immense learning time. And I think maybe I should stick with Unity and then see where I’m limited in a way that would need something external. But I’m always looking to gain that procedural edge, because it seems to be multiple times more powerful than regular work.
But I know the way to learn it is to use it. And to use it means to not use Unity, and in the end I will make a little progress and return to Unity anyway. So maybe I should not do that. Lesson learned. But if ever you want to lay down some Houdini knowledge, I’m all ears.