Interacting with Scene View picking

When you click in the Scene View to pick objects for selection, how does Unity process that click, and how can we add our own meshes to the picking?

In one case I want to draw a custom mesh gizmo for my object. I can draw it using Graphics.DrawMesh from within OnDrawGizmos, but it doesn’t respond to clicks; I want clicking on the mesh to select the object. How can I do that?

In another case I want to use a skinned mesh as a gizmo. I can set the mesh up by creating a GO with appropriate HideFlags, and it’s visible, but again it’s not selectable.

I’ve actually been working on an editor extension with some complex interactions so I think I might be able to help or at least point you in the right direction.

The first key I found was SceneView.onSceneGUIDelegate, which lets you globally hook into scene-view processing. I hook it from a static class (using [InitializeOnLoad] and a static constructor). When that gets invoked you can check things like Event.current and process it as the documentation describes (calling Event.current.Use() if you want to suppress further processing of the event).
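A minimal sketch of that hook, assuming an older Unity where onSceneGUIDelegate exists (newer versions renamed it to SceneView.duringSceneGui); the class and method names here are my own placeholders:

```csharp
using UnityEditor;
using UnityEngine;

// [InitializeOnLoad] runs the static constructor when the editor
// loads or recompiles scripts, so the hook registers itself.
[InitializeOnLoad]
public static class SceneViewHook
{
    static SceneViewHook()
    {
        // On recent Unity versions use SceneView.duringSceneGui instead.
        SceneView.onSceneGUIDelegate += OnSceneGUI;
    }

    static void OnSceneGUI(SceneView sceneView)
    {
        Event e = Event.current;
        if (e.type == EventType.MouseDown && e.button == 0)
        {
            // ... custom hit testing goes here ...
            // e.Use(); // swallow the event to suppress Unity's default picking
        }
    }
}
```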

The next key is using HandleUtility.GUIPointToWorldRay(Event.current.mousePosition) to get a ray into the scene from the mouse position. With that you can use Physics.Raycast or any other method to hit-test the scene. I check for a specific component, but you could run any logic to respond to what you hit.
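Putting the two keys together, the hit test inside the scene-view hook might look like this; MyGizmoTarget is a hypothetical marker component standing in for whatever you check for:

```csharp
using UnityEditor;
using UnityEngine;

// Hypothetical marker component identifying objects we want to pick.
public class MyGizmoTarget : MonoBehaviour { }

public static class GizmoPicking
{
    // Call this from your scene-view hook on left mouse-down.
    public static void TryPick()
    {
        Ray ray = HandleUtility.GUIPointToWorldRay(Event.current.mousePosition);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit))
        {
            // Only react to objects carrying the marker component.
            var target = hit.collider.GetComponent<MyGizmoTarget>();
            if (target != null)
            {
                Selection.activeGameObject = target.gameObject;
                Event.current.Use(); // stop Unity's default selection handling
            }
        }
    }
}
```

Note this relies on the object having a collider; without one you have to intersect the ray with your mesh geometry yourself, as mentioned below.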

I think that should work for you based on what you’ve described. If you don’t have a mesh collider you’ll have to do your own raycast processing, but at least that should get you going in the right direction.


Ah, I see…

That sounds like a reasonable approach for the basis of a workaround. I’m guessing it doesn’t play nicely with things like drag-rectangle selections?

edit: ah I guess it probably does if I implement all the rectangle hit-testing myself as well…

So is that really the only way to proceed, or has anyone found a better solution since then? Raycasting seems pretty tedious, and if you don’t actually have colliders, how do you sort through the geometry? If anyone cares to share a working example, that’d be great.
It looks like Unity actually does pixel-based picking (since you can’t pick geometry through holes in its texture); the problem, obviously, is that when you use Graphics.DrawMesh there is nothing that makes the pixel-to-GameObject association.

Hello,

I stumbled upon this exact issue a couple of days ago. My problem was that a lot of our “area triggers” needed a solid scene representation in the editor so that the level designers could scale them and move them around. The obvious easy solution was to create prefabs with box meshes and materials using various “editor” textures, so we could quickly distinguish between trigger types. However, this involved an extra tedious “cleanup” pass when those triggers were baked into the build, as we didn’t want to carry those meshes and materials along (we only needed the collider placement from each one).

So my solution was to “replicate” the look of those meshes using a static gizmo-mesh provider class (like the one @ mentions), which creates and supplies a custom box that is drawn in the CustomEditor classes of those trigger scripts (using Graphics.DrawMesh with that box and the same material the old prefabs used). The picking issue was solved by adding an OnDrawGizmos method to my trigger scripts that draws a transparent solid gizmo box on top of the mesh drawn by the CustomEditor. Given that Gizmos.DrawMesh can be called with the same mesh used by the Graphics.DrawMesh call, I reckon the picking issue can be solved for a lot more cases than mine.
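The picking half of that approach can be sketched roughly like this; AreaTrigger and its boxMesh field are placeholder names for the trigger script and the mesh supplied by the provider class:

```csharp
using UnityEngine;

// Hypothetical trigger script: the gizmo mesh drawn here registers with
// Unity's scene-view picking, so clicking the box selects the object,
// while the visible "pretty" mesh is drawn separately via Graphics.DrawMesh.
public class AreaTrigger : MonoBehaviour
{
    public Mesh boxMesh; // same mesh the CustomEditor draws with Graphics.DrawMesh

    void OnDrawGizmos()
    {
        if (boxMesh == null) return;

        // Almost fully transparent so the editor-drawn mesh shows through,
        // but still solid as far as gizmo picking is concerned.
        Gizmos.color = new Color(1f, 1f, 1f, 0.01f);
        Gizmos.DrawMesh(boxMesh, transform.position, transform.rotation, transform.localScale);
    }
}
```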