I have multiple prefabs that are snippets of a level, which I then stitch together procedurally. Each prefab also contains a ScriptableObject sub-asset with additional data about the level snippet, such as entrance locations, which I use to chain the snippets together. I edit these positions manually in the Inspector, but since I can’t draw gizmos while editing the ScriptableObject, I get no visual feedback about where a position actually is.
Why use a ScriptableObject for this? I would just use empty GameObjects inside the prefab’s hierarchy to mark locations. In fact, that’s exactly what I’m doing in a project I’m currently working on.
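As a minimal sketch of that marker-GameObject approach (the `EntranceMarker` component and `SnippetEntrances` helper are made up for illustration, not from the original post), each prefab could expose its entrance positions by collecting marker children:

```csharp
using System.Linq;
using UnityEngine;

// Hypothetical marker component: attach it to empty child GameObjects
// that mark entrance positions inside a level-snippet prefab.
public class EntranceMarker : MonoBehaviour
{
    // Gizmos draw automatically for components in the hierarchy,
    // which is exactly the visual feedback the ScriptableObject lacks.
    private void OnDrawGizmos()
    {
        Gizmos.color = Color.green;
        Gizmos.DrawSphere(transform.position, 0.25f);
    }
}

// Example consumer: gather all entrance positions from a snippet instance.
public static class SnippetEntrances
{
    public static Vector3[] GetEntrancePositions(GameObject snippet)
    {
        return snippet.GetComponentsInChildren<EntranceMarker>()
                      .Select(m => m.transform.position)
                      .ToArray();
    }
}
```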
I can, but I was hoping ScriptableObjects had some gizmo features that would simplify this. I’ve got tons more data in the SO, so it makes perfect sense to edit the positions there as well.
You’d have to write some custom Editor script(s) I think.
I want to do the same thing with an SO I’m using. I just found out about the DrawGizmo attribute. I wonder if it works in ScriptableObjects?
Why not just try it out and see?
I was in the car, parked at the grocery store, at the time. I did mess around with it later. While you can put that attribute on a method anywhere, including in a ScriptableObject, the method must be static and must take a parameter of a Component-derived type — and Component is what MonoBehaviour inherits from, not ScriptableObject. Unity then calls that method for every instance of that component type in the scene. So unfortunately I don’t think the DrawGizmo attribute is the answer for rendering stuff for ScriptableObjects in the Scene view. Its real use is letting you move your in-editor gizmo-drawing code out of the MonoBehaviour game code and into a separate editor-only file, so you can keep game code and editor code more separate.
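To illustrate that last point, here’s a sketch of the attribute’s intended use (the `Waypoint` component is invented for the example). Note the first parameter must be a Component-derived type, which is why a ScriptableObject can’t be the target:

```csharp
using UnityEditor;
using UnityEngine;

// A plain MonoBehaviour with no gizmo code of its own.
public class Waypoint : MonoBehaviour { }

// Editor-only file: Unity calls this static method for every
// Waypoint in the scene, keeping gizmo code out of game code.
public static class WaypointGizmos
{
    [DrawGizmo(GizmoType.Selected | GizmoType.NonSelected)]
    private static void DrawWaypointGizmo(Waypoint waypoint, GizmoType gizmoType)
    {
        Gizmos.color = Color.cyan;
        Gizmos.DrawWireSphere(waypoint.transform.position, 0.5f);
    }
}
```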
Hello! I know this is an old question, but I think it’s important enough to share my solution (after some research on the web).
There’s an editor class named Handles that can draw some of the basic shapes Gizmos can, and it can be called from the SceneView.duringSceneGui delegate. So if you create a custom Editor for your ScriptableObject, you can hook into SceneView.duringSceneGui and draw your gizmos there.
A little bit tricky, but I really needed to draw gizmos for a ScriptableObject (it lives inside a Timeline), so it was worth it.
Here’s my sample code:
using UnityEditor;
using UnityEngine;

[CustomEditor(typeof(MyScriptableObject))]
public class MyScriptableObjectEditor : Editor
{
    private void OnEnable()
    {
        // Subscribe while this asset's inspector is open.
        SceneView.duringSceneGui += DrawMyGizmos;
    }

    private void OnDisable()
    {
        // Always unsubscribe, or the delegate leaks.
        SceneView.duringSceneGui -= DrawMyGizmos;
    }

    private void DrawMyGizmos(SceneView sceneView)
    {
        // Draw a Gizmos-like wire cube with Handles.
        Handles.DrawWireCube(Vector3.zero, Vector3.one * 2f);
    }
}
Hi, I solved it with Unity actions. Barely tested.
using UnityEngine;
using UnityEngine.Events;

[ExecuteInEditMode]
public class ScriptableObjectRenderer : MonoBehaviour
{
    UnityAction drawGizmosAction;
    ScriptableObjectDrawingGizmo visualScriptableObject;

    void OnEnable()
    {
        // Use == null rather than "is null": Unity objects override
        // the == operator to account for destroyed native objects.
        if (visualScriptableObject == null)
        {
            visualScriptableObject = ScriptableObject.CreateInstance<ScriptableObjectDrawingGizmo>();
            drawGizmosAction += visualScriptableObject.DrawGizmo();
        }
    }

    void OnDrawGizmos()
    {
        drawGizmosAction?.Invoke();
    }
}

public class ScriptableObjectDrawingGizmo : ScriptableObject
{
    public Vector3 position { get; set; }

    // Returns a delegate that draws this object's gizmo; the host
    // MonoBehaviour invokes it from its own OnDrawGizmos.
    public UnityAction DrawGizmo()
    {
        return () =>
        {
            Gizmos.color = Color.yellow;
            Gizmos.DrawSphere(position, 0.05f);
        };
    }
}
What’s the advantage of using UnityAction here? Am I missing something? A regular Action would do.
They’re functionally identical here — no advantage to using either one.
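For the record, swapping in System.Action is a one-line change, since both are parameterless `void` delegate types and the same lambda assigns to either (a sketch, not from the original post):

```csharp
using System;
using UnityEngine;

public class ActionExample : MonoBehaviour
{
    // System.Action has the same "void ()" signature as
    // UnityEngine.Events.UnityAction, so the field type is interchangeable.
    Action drawGizmosAction;

    public void Register(Vector3 position)
    {
        drawGizmosAction += () =>
        {
            Gizmos.color = Color.yellow;
            Gizmos.DrawSphere(position, 0.05f);
        };
    }

    void OnDrawGizmos()
    {
        drawGizmosAction?.Invoke();
    }
}
```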