I’ve made this thread to continue discussion of @laurentlavigne 's suggestions about slimmer API for iteration speed, as the release discussions get locked when a new release comes in and I’d like to understand this better.
Latest discussion and my current reply below:
I fully agree with these 2 statements:
Iteration speed is hugely important during development, and optimisation can happen after.
Boilerplate and verbosity are a pain.
This is why we generate the boilerplate and verbose parts for you automatically. Personally, the most painful part of Unity script editing for me is when you first create a new script, wait for Unity's domain reload, open the script, figure out and write all the boilerplate, then go back to Unity for yet another domain reload. In my opinion, the way we directly generate all that boilerplate for you and open the script eliminates a lot of that pain.
Can you help me understand where this is lacking, or is it just purely the fact that one could add behavior code anywhere? Because, correct me if I’m wrong, you’ll still need to go back into the graph in order to insert your newly added action and add it to the flow.
I can see the value in adding a slim API for programmer sugar, but:
I think that removes the designer’s ability to edit the high level nodes and graph, which then brings back additional dependency of the designer on the programmer and reduces velocity again.
This brings a much bigger challenge in terms of data, which your sample wasn't showing: now you need to pass the blackboard reference so you can query the variables, manage your own state and local variables somehow, and so on. I don't think your short code snippets address that.
I’m happy to discuss this and see if I’m missing something and if we can potentially create an extension that satisfies that (although it won’t be a priority at this time as we have a lot in the works already), I just need to fully understand it and believe it’s the right decision and direction to take before we can commit.
Just to add a different perspective, this is how it works for me:
Create script in IDE (using IDE provided templates for MB, SO, Tests, EditorWindow, etc)
Make basic edits (Autocompletion, Templates and AI help add boilerplate code quickly)
Go to Unity, wait for domain reload
There only needs to be one domain reload, and the boilerplate is really a matter of using a decent IDE and its features.
I practically never create scripts in Unity Editor because it’s so limited and painful with the extra compile (if you’re quick you can actually double-click the script before compilation occurs).
I didn’t follow this discussion. But if it’s worth anything, I’d always root for full programmer access even for design-time tools. Because:
This is a reality in professional environments. The designers will most commonly have to rely on programmers to add game-specific nodes for visual editing tools, while the programmers want to make sure they can later optimize the designer’s node graphs.
But honestly I don’t fully understand what you’re discussing here or the implications of it.
To me it appears to be the difference between retained-mode (object-oriented: creating objects that represent things in a behavior) and immediate-mode (functional) programming, where you don't.
Both have their merits, and it's often a point of discussion in UI library design, for example. Essentially, I think the OP is asking whether they could just write one function that handles the logic for their node. Why do they need to write a new class with Start/Update/End, etc.?
I think this approach is fine for some things, but in the case of Behavior there is a minimum level of functionality required for actions, which is supplied by our base classes and can't really be communicated through a single function definition. It also wouldn't work with serialization, or would be substantially harder to set up.
I could be well off here but that’s how it reads to me.
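To make the contrast concrete, here is a rough, language-agnostic sketch of the two styles being discussed. All names (`WaitNode`, `Status`, the lifecycle hooks) are hypothetical illustrations, not the actual Behavior API:

```python
# Hypothetical sketch of retained-mode vs immediate-mode node definitions.
# Names are illustrative only, not the Unity Behavior API.
from enum import Enum

class Status(Enum):
    RUNNING = 0
    SUCCESS = 1
    FAILURE = 2

# Retained-mode style: a node is an object with explicit lifecycle hooks.
class WaitNode:
    def __init__(self, duration):
        self.duration = duration
        self.elapsed = 0.0

    def on_start(self):          # called once when the node is entered
        self.elapsed = 0.0

    def on_update(self, dt):     # called every tick while running
        self.elapsed += dt
        return Status.SUCCESS if self.elapsed >= self.duration else Status.RUNNING

    def on_end(self):            # called once when the node exits
        pass

# Immediate-mode style: the whole node is a single function; any state it
# needs (here, elapsed time) must be carried explicitly, e.g. in a blackboard.
def wait(blackboard, dt, duration):
    blackboard["elapsed"] = blackboard.get("elapsed", 0.0) + dt
    return Status.SUCCESS if blackboard["elapsed"] >= duration else Status.RUNNING
```

Note how the single-function version surfaces exactly the data problem raised earlier in the thread: per-node state has to live somewhere the function can reach, such as a blackboard passed in explicitly.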
You generate it, but it's still scaffolding. I think I understand where our views diverge:
you see scaffolding as a pain point during generation; I see scaffolding as cognitive overhead, a kind of visual noise.
BTW, many people use Hot Reload nowadays, so there is no domain reload when changing code inside methods.
Yes, anywhere, and with minimal scaffolding: only the logic one would need anyway, such as node entry and state.
And you're right that the back and forth isn't avoided.
This isn’t what’s suggested. You keep the visualization, since people like it (I prefer a hierarchical view, as mentioned below); it’s just that how the graph binds to our code is simpler with reflection and attributes.
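As a hedged sketch of what attribute-plus-reflection binding could look like (names are hypothetical, and a Python decorator stands in for a C# attribute): plain functions get tagged and discovered by name, so the graph only stores a node name and resolves it at load time instead of requiring a hand-written class per node.

```python
# Hypothetical sketch of attribute/reflection-style node binding.
# A decorator plays the role of a C# [Action]-style attribute.

NODE_REGISTRY = {}

def action(name):
    """Register a plain function as a named node."""
    def register(fn):
        NODE_REGISTRY[name] = fn
        return fn
    return register

@action("IsTargetVisible")
def is_target_visible(blackboard):
    # Illustrative condition reading a blackboard variable.
    return blackboard.get("target_distance", 999) < 10

# The graph stores only the node name; binding is resolved via the registry,
# the moral equivalent of reflecting over attributes in C#.
def run_node(name, blackboard):
    return NODE_REGISTRY[name](blackboard)
```

The design trade-off is the one raised earlier in the thread: the graph stays designer-editable because nodes are still named, discoverable units, but the programmer writes one function instead of a class.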
Since we have our own thread to discuss this vast topic, I’ll offer a different perspective, because I often find it easier to go super wide-angle to get the thinking in sync.
These thoughts come from working on many projects where the intelligence started as a complicated BT and then had to be simplified.
A graph isn’t always the best way to visualize a BT, FSM, or even GOAP. When switching between graph and code there is a mental hiccup; it’s smaller with a vertical hierarchy. A vertical hierarchy is also more concise, so it can fit in the inspector and make debugging easier. That might be why major 3D apps (XSI/Waveform/Softimage 3D) switched from graph views to vertical hierarchies over the past decades. I use Panda for that reason.
Often you don’t need a BT; an HFSM will do the job. In fact, when emulating some intelligence in Unreal, you can do it in Blueprint with time-control nodes and state switches.
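For readers unfamiliar with the term, an HFSM is just a state machine whose states can contain nested state machines. A minimal, hypothetical sketch (not any shipped library):

```python
# Minimal hierarchical FSM sketch. All names are illustrative.

class State:
    def __init__(self, name, on_tick=None, substates=None, initial=None):
        self.name = name
        self.on_tick = on_tick              # leaf behavior, if any
        self.substates = substates or {}    # nested machine
        self.current = initial              # active substate name

    def tick(self, agent):
        if self.on_tick:
            self.on_tick(agent)
        if self.current:                    # recurse into the active substate
            self.substates[self.current].tick(agent)

    def transition(self, to):
        self.current = to

# Example: a "Combat" super-state containing "Chase"/"Attack" substates.
def chase(agent):  agent["log"].append("chase")
def attack(agent): agent["log"].append("attack")

combat = State("Combat",
               substates={"Chase": State("Chase", on_tick=chase),
                          "Attack": State("Attack", on_tick=attack)},
               initial="Chase")
```

The hierarchy is what keeps it manageable: transitions inside "Combat" never touch states elsewhere in the machine, which is why simple behaviors often don't need a full BT.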
For exotic behavior, GOAP is nicer because it’s a more concise way to describe the solution space.
I’ll be heretical enough to say that BT should be a last resort: start with an HFSM, and when a static behavior space isn’t enough, skip BT and go to GOAP.
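To illustrate why GOAP is concise: you declare actions as precondition/effect pairs and let a planner search for a sequence, instead of authoring the sequencing by hand. A toy sketch with hypothetical example data (real GOAP implementations typically use A* with action costs rather than plain BFS):

```python
# Toy GOAP planner: breadth-first search over precondition/effect actions.
# Action data and names are hypothetical examples.
from collections import deque

ACTIONS = {
    # name: (preconditions, effects)
    "GetAxe":   ({"has_axe": False}, {"has_axe": True}),
    "ChopWood": ({"has_axe": True},  {"has_wood": True}),
}

def applies(state, conds):
    # A condition set holds if every key matches (missing keys read as False).
    return all(state.get(k, False) == v for k, v in conds.items())

def plan(state, goal):
    queue = deque([(state, [])])
    seen = set()
    while queue:
        current, steps = queue.popleft()
        if applies(current, goal):
            return steps
        key = frozenset(current.items())
        if key in seen:
            continue
        seen.add(key)
        for name, (pre, eff) in ACTIONS.items():
            if applies(current, pre):
                queue.append(({**current, **eff}, steps + [name]))
    return None  # goal unreachable with the given actions
```

Notice that nothing in the action table says "get the axe before chopping"; that ordering emerges from the search, which is what makes the description of the solution space so compact.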
Now, back to Behavior, I’ll give you a concrete example:
This was too slow for 100 agents on Quest 2.