For now, we’re calling it proto-1. We’ll be sharing Unity-specific details on this thread, as well as more general discussions on TIGSource here, and on our blog here. For anyone who is interested, please share your thoughts. Links to playable builds coming soon!
Hi. As far as I understand, you can’t control the character directly. If so, it’s really cool. I’ve had a similar idea for an infinite runner.
Maybe my plugin would be helpful for your explosions (it will be free, so it’s not an advertisement).
You can control the character directly, but it’s within the context of the physics engine. So, telling the character to move right applies a physics force in the right direction. At the end of the day, the goal is tight controls, but within the context of the physics engine, so that realistic dynamics affect every part of the gameplay (character movement included).
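To illustrate, here’s a minimal sketch of what “move right applies a force” might look like. This is my own illustration, not the game’s actual controller; `moveForce` and `maxSpeed` are made-up tuning values:

```csharp
using UnityEngine;

// A sketch of force-based character movement, assuming a Rigidbody2D setup.
// moveForce and maxSpeed are hypothetical tuning values.
public class ForceMover : MonoBehaviour
{
	public float moveForce = 20f;
	public float maxSpeed = 6f;

	Rigidbody2D body;

	void Awake ()
	{
		body = GetComponent<Rigidbody2D> ();
	}

	void FixedUpdate ()
	{
		// Input becomes a force, not a position change, so the physics
		// engine (friction, collisions, knockback) still has the last word.
		float input = Input.GetAxis ("Horizontal");
		body.AddForce (Vector2.right * input * moveForce);

		// Clamp horizontal speed so controls stay tight despite the forces.
		if (Mathf.Abs (body.velocity.x) > maxSpeed) {
			body.velocity = new Vector2 (Mathf.Sign (body.velocity.x) * maxSpeed, body.velocity.y);
		}
	}
}
```

The clamp is one way to get “tight controls” back while still letting external forces act on the character.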
Thought I’d share our implementation of a simple sandbox tool I created early on (well, it’s still very early) in development. It’s great for testing out scenarios and scene changes on the fly, from right within a running build of the game. First here’s a little clip:
The main piece of code is actually quite simple. It’s called ObjectPlacer, and it does just that:
using UnityEngine;

public class ObjectPlacer : MonoBehaviour
{
	/// <summary>
	/// Possible prefabs to be selected randomly
	/// </summary>
	public GameObject[] prefabs;

	GameObject current;

	void Update ()
	{
		if (prefabs.Length < 1) {
			return;
		}
		// recreate a prefab to place if there is none
		if (current == null) {
			ReCreate ();
		}
		current.transform.position = transform.position;
		if (Input.GetMouseButtonDown (0)) {
			PlaceObject (current);
		}
	}

	public void ReCreate ()
	{
		// handle when Unity UI calls methods in edit mode
		if (!Application.isPlaying || !enabled || !gameObject.activeInHierarchy) {
			return;
		}
		if (current != null) {
			Destroy (current);
		}
		current = CreateObject (prefabs [Random.Range (0, prefabs.Length)]);
	}

	protected virtual void PlaceObject (GameObject go)
	{
		// let go of the current object, leaving its position set
		// and signaling for a new one to be created
		current = null;
	}

	protected virtual GameObject CreateObject (GameObject fromPrefab)
	{
		return Instantiate (fromPrefab) as GameObject;
	}

	void OnDisable ()
	{
		if (current != null) {
			Destroy (current);
		}
	}
}
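Since `PlaceObject` and `CreateObject` are virtual, subclasses can customize placement. For example, here’s a hypothetical grid-snapping placer (my own sketch, not part of the actual tool; `cellSize` is an invented parameter):

```csharp
using UnityEngine;

// Hypothetical ObjectPlacer subclass that snaps placed objects to a grid.
// cellSize is an invented tuning parameter, not from the original tool.
public class GridObjectPlacer : ObjectPlacer
{
	public float cellSize = 1f;

	protected override void PlaceObject (GameObject go)
	{
		// Snap the object's position to the nearest grid cell before release.
		Vector3 p = go.transform.position;
		go.transform.position = new Vector3 (
			Mathf.Round (p.x / cellSize) * cellSize,
			Mathf.Round (p.y / cellSize) * cellSize,
			p.z);
		base.PlaceObject (go);
	}
}
```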
For each type of object/prefab you want to be able to place in the scene, you’d have an ObjectPlacer object in the hierarchy which points to that prefab (or to multiple prefabs to pull from randomly):
All the Unity UI does is set which ObjectPlacer is active/inactive on button toggle:
One important note: you want to make sure that ObjectPlacers aren’t active while hovering over buttons. Otherwise, any time you select a new button, it will place the currently selected object at the location of that button. After some fumbling with the Rect class, I realized there’s a much simpler way to achieve this. The new Unity UI automatically swallows events via the GraphicRaycaster component attached to the canvas. All we need to do is create a “placement area” panel/image behind the rest of the GUI which turns the sandbox tools’ parent on/off on pointer enter/exit. Though it is fully transparent, the Image component is required to trigger the enter/exit events:
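Here’s a sketch of how that enter/exit toggling could be wired up, assuming the transparent-panel approach above; `placerParent` is an invented reference to the object holding the active ObjectPlacers:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// A sketch of the "placement area" panel: it sits behind the GUI with a
// fully transparent Image component so it receives pointer events.
// placerParent is a hypothetical reference, named for illustration.
public class PlacementArea : MonoBehaviour, IPointerEnterHandler, IPointerExitHandler
{
	public GameObject placerParent;

	public void OnPointerEnter (PointerEventData eventData)
	{
		// Pointer is over the play area, so placement is allowed.
		placerParent.SetActive (true);
	}

	public void OnPointerExit (PointerEventData eventData)
	{
		// Pointer moved over the GUI; disable placers so button clicks
		// don't drop objects on top of the buttons.
		placerParent.SetActive (false);
	}
}
```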
Here’s a first look at the door that transports the player between its cozy home base and the big bad game world.
The idea is that between levels the player will be able to travel to his/her safe-home, wherein powerups, customizations, and other fun stuff can be configured.
I’m digging the Geometry Wars stripped-down graphics (what’s the proper name for that style?) and the ticks are looking awesomely creepy. I will totally provide detailed feedback once we have a web demo.
I do have a question for OP, and I’m sorry for watering down the thread, but OP stated: “So, telling the character to move right applies a physics force in the right direction.”
I’m working on a top-down game that is not really physics-based, but I’m still using AddForce and the like to move characters, projectiles, and other things around. Is that somehow bad practice if a game isn’t meant to be ‘physics-based’? Thanks guys
I wouldn’t say it’s bad practice, as long as you’re happy with the results. Most of the time, with character controllers especially, you’ll see people set the rigidbody velocity directly. Doing it that way gives somewhat more predictable results (you specify exactly how fast and in what direction the character moves). In my case, I wanted that degree of unpredictability, so that interesting physics interactions could bubble up emergently. Like in this early test where the player grabs a flying enemy and drags it to the ground. The only part of that interaction that was coded was the grab:
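To make the trade-off concrete, here’s a sketch contrasting the two styles; the names and values are invented, and neither is “the” right answer:

```csharp
using UnityEngine;

// Contrasts direct-velocity movement with force-based movement.
// speed, moveForce, and useForces are invented illustration fields.
public class MovementStyles : MonoBehaviour
{
	public float speed = 6f;       // used by the direct-velocity style
	public float moveForce = 20f;  // used by the force-based style
	public bool useForces;

	Rigidbody2D body;

	void Awake ()
	{
		body = GetComponent<Rigidbody2D> ();
	}

	void FixedUpdate ()
	{
		float input = Input.GetAxis ("Horizontal");
		if (useForces) {
			// Force-based: input competes with every other force, so
			// knockback, drag, and collisions all leave their mark.
			body.AddForce (Vector2.right * input * moveForce);
		} else {
			// Direct velocity: fully predictable, but it overwrites
			// whatever the physics engine wanted to do on this axis.
			body.velocity = new Vector2 (input * speed, body.velocity.y);
		}
	}
}
```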
Thanks, I’ll definitely try to post about the AI in the next week. I’ll include a gif of the load test I did with a whole bunch of those little creepers on the screen at once.
I had begun using a simple state machine codebase from a previous project, but decided to try using Mecanim as a generic state machine. Having a visual state tool in the editor is a big aid to tackling complex AI. After seeing the announcement that Unity 5 will support state machine behaviours, I thought it would be nice to implement a wrapper solution in the meantime, which would not only allow the use of Mecanim state behaviours, but also easy migration when Unity 5 becomes available.
The main class is called “MecanimWrapper”. It associates Mecanim states with Unity behaviours, and enables or disables those behaviours based on the active Mecanim state.
You can see that the names of the state behaviours listed in the Mecanim Wrapper match those seen in the screenshot of the Mecanim Animator state machine. So, when the state machine idle state starts, the AiGroundIdle script is enabled. When the state switches to chase, the AiGroundIdle script is disabled and the AiGroundChase script is enabled. All the while, the Animator windows gives a clear view of which state is active. Very handy for debugging and visualizing AI.
So here’s really the only thing you need to try it out yourself, the MecanimWrapper class (AiMecanimWrapper above is a basic extension of that class).
using UnityEngine;
using System.Collections.Generic;

public class MecanimWrapper : MonoBehaviour
{
	public Animator animator;
	public StateBehaviour[] stateBehaviours;

	static int CURRENT_STATE_TIME = Animator.StringToHash ("currentStateTime");

	Dictionary<int, Behaviour[]> behaviourCache;
	int currentState;
	float _currentStateTime;

	float currentStateTime {
		get {
			return _currentStateTime;
		}
		set {
			_currentStateTime = value;
			animator.SetFloat (CURRENT_STATE_TIME, _currentStateTime);
		}
	}

	void Start ()
	{
		behaviourCache = new Dictionary<int, Behaviour[]> ();
		foreach (StateBehaviour item in stateBehaviours) {
			int nameHash = Animator.StringToHash (item.layer + "." + item.state);
			behaviourCache.Add (nameHash, item.behaviours);
			SetBehavioursEnabled (item.behaviours, false);
		}
	}

	void Update ()
	{
		currentStateTime += Time.deltaTime;
		int state = animator.GetCurrentAnimatorStateInfo (0).nameHash;
		if (state != currentState) {
			ChangeState (state);
		}
	}

	void ChangeState (int toState)
	{
		if (behaviourCache.ContainsKey (currentState)) {
			SetBehavioursEnabled (behaviourCache [currentState], false);
		}
		if (behaviourCache.ContainsKey (toState)) {
			SetBehavioursEnabled (behaviourCache [toState], true);
		}
		currentState = toState;
		currentStateTime = 0f;
	}

	void SetBehavioursEnabled (Behaviour[] behaviours, bool enabled)
	{
		foreach (Behaviour behaviour in behaviours) {
			behaviour.enabled = enabled;
		}
	}

	[System.Serializable]
	public class StateBehaviour
	{
		public string state;
		public string layer = "Base Layer";
		public Behaviour[] behaviours;
	}
}
One important note: in these state machines, I’m using a transition time of 0. I’m not certain whether states overlap during a transition with time > 0, so keep that in mind when creating your Mecanim state machines.
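For completeness, here’s what one of the wired-in state behaviours could look like. This is a sketch only; the body of AiGroundChase and its fields are invented to match the naming in the screenshots, not the game’s actual AI:

```csharp
using UnityEngine;

// Hypothetical chase behaviour that a MecanimWrapper entry would enable
// while the "chase" Mecanim state is active. target and speed are
// invented illustration fields.
public class AiGroundChase : MonoBehaviour
{
	public Transform target;
	public float speed = 3f;

	void OnEnable ()
	{
		// Called when MecanimWrapper enables this behaviour, i.e. when
		// the state machine enters the chase state.
	}

	void Update ()
	{
		// Only runs while the chase state is active; the wrapper
		// disables this script when the state machine leaves chase.
		Vector3 dir = (target.position - transform.position).normalized;
		transform.position += dir * speed * Time.deltaTime;
	}
}
```

Since the wrapper just flips `enabled`, `OnEnable`/`OnDisable` line up naturally with state enter/exit.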