Drive a world-space point from a sprite / Get the world-space location of a certain-color pixel

I’m working on a 2D game with sprites.
I have an idea: have an animated sprite drive the position of some other GameObject in the scene.
I really don’t want to deal with manually synchronizing the sprite animation with multiple other simultaneous Unity animations.

So I wonder if the following is possible/feasible:

  1. Have a certain/special color pixel within the sprite.
  2. Have the shader detect and somehow report the location of this pixel. (<- this is the step I’m currently struggling with)
  3. Have a script get the world-space position of said pixel and move the needed GameObject to that location.

This way, I could drive the position of other objects purely via sprite frame animation without the need to create other animations in Unity. And later changing the movement would be super easy - I would only need to change the visual sprite.

So is this approach feasible? Is it somehow possible to get a numerical variable from a shader? Or is there some other way to get the location of a specific pixel?

Example.
Let’s say I have the following sprite sheet for an animation.
I have chosen a special color (magenta in this case) for my special sprite pixel.
So now I want to somehow get the location of this pixel in every frame and use this data in a script (e.g. to position some other gameobject).

I have tinkered with custom shader code before, so I have some idea of how it all works, but I don’t know if and how it’s possible to read a variable (pixel location) from a shader.

yes, in a shader you can do something like an append buffer (append if the pixel is a certain color),
then read those values back with Unity - Scripting API: AsyncGPUReadbackRequest or so.
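roughly something like this in the fragment shader (just a sketch, untested — the buffer name, marker color property, and tolerance are all made up):

```hlsl
// sketch: append the uv of any pixel matching the marker color,
// so a script can read the positions back from the buffer later
AppendStructuredBuffer<float2> _MarkerPositions : register(u1);

fixed4 frag (v2f i) : SV_Target
{
    fixed4 col = tex2D(_MainTex, i.uv);
    // crude color match with a small tolerance (sprite compression
    // can shift the exact values, so an exact compare may fail)
    if (distance(col.rgb, _MarkerColor.rgb) < 0.01)
        _MarkerPositions.Append(i.uv);
    return col;
}
```

the buffer would be bound from C# with Graphics.SetRandomWriteTarget before the blit.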

*i’ll try to find my old test about it — i tried to use it for tracking a laser pointer on a webcam image… (or maybe you can find similar examples online too)

Yeah, maybe, maybe not.

Because you’re dealing with an animation, it’s not just going to be a single pixel in a single frame that you wish to alter. So you’d be removing and placing that pixel for several frames in an external program. That’s at the very least not intuitive as you have to look at a single pixel and don’t see the full thing in motion until after you import. And then it’s not right and you keep going back and forth.

Whereas if you came up with some simple system like an offset from the sprite pivot, plus rotating in a circle, a sine-wave motion, or following a spline, you could parameterize it and edit (experiment with) it in real time, even during play mode, until it’s just perfect. Nothing beats tweaking the final result in realtime!
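A minimal sketch of that idea — all names and default values here are hypothetical, just to show how tweakable it is from the Inspector:

```csharp
using UnityEngine;

// Sketch: drive a "hovering" object around a sprite pivot with a simple
// circular motion -- every field can be tuned live in play mode.
public class OrbitAroundPivot : MonoBehaviour
{
    public Transform pivot;                             // the sprite to follow
    public Vector2 baseOffset = new Vector2(0.5f, 0.5f); // offset from the pivot
    public float radius = 0.1f;                          // size of the wobble
    public float speed = 2f;                             // radians per second

    void Update()
    {
        float t = Time.time * speed;
        Vector2 wobble = new Vector2(Mathf.Cos(t), Mathf.Sin(t)) * radius;
        transform.position = (Vector2)pivot.position + baseOffset + wobble;
    }
}
```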

That will lead to good results far more quickly than the back and forth of editing keyframe pixels. Assuming you just used a spline for the animation, you could finish that task in under an hour, while I suspect you’ve already sunk far more than that into this sort of tacked-on functionality.

I suppose whatever follows the pixel is also going to cover up that pixel? If you ever were to need the character without the hovering thing, you’d also have an odd pixel to deal with.

So personally, I’d advise against going in that direction. :wink:

Thank you for the response!
I’ll look into that method. Also, if you manage to find your old code examples, I’d gladly look at that too. :slight_smile:

Thank you for the response!

Yeah, I understand the pros and cons of my desired approach.
But in my case, the “floating object” is very much tied to the “parent” sprite.
If I then decide to tweak the visual sprite a little or add some more frames, I would have to tweak the animations in Unity as well. And it’ll have to be done every time I change something.

For example, if I decide to tie a Unity light to a torch carried by a character, IMO it would be much easier to drive the light’s position from a marker in the sprite, rather than try to match the light’s movement manually in Unity.

The pixel being uncovered is also not an issue: since the detection is done in the shader, I can simply replace that pixel with any other color (e.g. the color of a neighboring pixel).
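In shader terms, that replacement could be sketched roughly like this (assuming `_MainTex_TexelSize` is declared in the usual Unity way, and reusing the helper names from above):

```hlsl
// Sketch: after detecting the marker, output a neighboring texel's color
// instead, so the magenta pixel never actually shows up on screen.
fixed4 col = SampleSpriteTexture(i.uv);
if (DoColorsMatch(col.rgba, _Marker1Color, 0) == 1)
    col = SampleSpriteTexture(i.uv + float2(_MainTex_TexelSize.x, 0));
```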

Found a few snippets,

so the shader appends the uv position into that buffer if it’s the correct HSV,
and then the script blits the screen with that shader and fetches the data from the GPU with GetData.
(these days there are async methods too)

probably a better alternative would be to pre-process those frames,
find the pixel position, and save it to an array once…
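that one-time scan could look something like this (sketch; class and method names are made up, and the texture would need Read/Write enabled in its import settings):

```csharp
using UnityEngine;

// Sketch: scan a sprite's texture rect once for the marker color and
// cache the pixel position, instead of detecting it on the GPU every frame.
public static class MarkerScanner
{
    // Returns the marker pixel's position within the sprite's texture rect,
    // or null if no pixel matches. The texture must be CPU-readable.
    public static Vector2? FindMarker(Sprite sprite, Color marker)
    {
        Rect r = sprite.textureRect;
        Color[] pixels = sprite.texture.GetPixels(
            (int)r.x, (int)r.y, (int)r.width, (int)r.height);

        for (int y = 0; y < (int)r.height; y++)
            for (int x = 0; x < (int)r.width; x++)
                // exact compare; with compressed textures a small
                // tolerance would be safer
                if (pixels[y * (int)r.width + x] == marker)
                    return new Vector2(x, y);
        return null;
    }
}
```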


Thank you, for now!

I did the shader part (compared colors and appended the i.uv into that buffer)

Now I’m still trying to wrap my head around the reading-from-buffer part.
Somehow it doesn’t work and GetData returns an empty array.

I’ve never used compute shaders. Does it matter, that this is a regular shader?

that script uses void OnRenderImage,
which might not be available on URP/HDRP, if you happen to be on URP?
then you’d need to create some custom pass script that calls the same logic… (but i don’t know if the blitting part is different in URP too).

For testing, you could try using BIRP (builtin render pipeline).

and for debugging,
try adjusting the shader to return some red or white color when it finds a suitable pixel,
so you can see whether that part works.
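something like this (sketch — marker color property and tolerance are made up):

```hlsl
// debug sketch: flash matching pixels solid red so you can verify the
// color test before worrying about the buffer readback
fixed4 frag (v2f i) : SV_Target
{
    fixed4 col = tex2D(_MainTex, i.uv);
    if (distance(col.rgb, _MarkerColor.rgb) < 0.01) // tolerance is a guess
        return fixed4(1, 0, 0, 1);
    return col;
}
```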

The more I do this, the more I understand how little I actually understand. :rofl:
One thing is editing an existing shader to add a custom sprite outline and fill, but this all seems to be on another level.

So after some digging and experimenting, I was able to pass some data from the shader to C# scripts.
First I used RWStructuredBuffer instead of AppendStructuredBuffer.

RWStructuredBuffer<float2> dataBuffer : register(u1);

So then I was able to do something like this in the shader:

   if (DoColorsMatch(SampleSpriteTexture(i.uv).rgba, _Marker1Color, 0) == 1)
   {
        dataBuffer[0] = i.uv; 
   }

And then in the C#, I managed to read the buffer

    public Material material;
    ComputeBuffer compute_buffer;
    Vector2[] data;

    void Start()
    {
        data = new Vector2[1];
        // one float2 slot is enough here, since the shader only writes index 0
        compute_buffer = new ComputeBuffer(1, sizeof(float) * 2, ComputeBufferType.Structured);
    }

    void Update()
    {
        // bind the buffer to u1 so the fragment shader can write into it
        Graphics.SetRandomWriteTarget(1, compute_buffer, true);
        // synchronous readback: stalls until the GPU has written the data
        compute_buffer.GetData(data);

        Debug.Log(data[0]);
    }

    void OnDestroy()
    {
        compute_buffer.Release(); // compute buffers are not garbage collected
    }

The surprise came when I realized that I don’t really understand what those values represent. I thought i.uv was the pixel’s coordinates in UV space.
I tried with an animation of 6 frames (with the special pixel being in the same place for all of them) and what I got was 6 different values with seemingly no relation to where the pixel was in regard to the rest of the sprite.
Also if I use dataBuffer[0] = (i.uv.x, i.uv.y);, I get completely different values.

So there’s still some figuring-out to be done on my part.

Also, this GetData approach seems to be quite taxing. Not sure how reliable Unity’s play mode is for performance testing (fps), but since implementing this, the indicated frame rate dropped from ~190 fps to around 120 fps.
Might want to look into those async methods later. But first I want to understand and get this to work without adding the extra async complexity.

yes, GetData is slow; there’s an async version (AsyncGPUReadback),
and the shader runs as a full-screen blit, so the uv is a viewport UV value.
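roughly, the async route plus the viewport-to-world conversion could look like this (sketch, untested — class and field names are made up; the z passed to ViewportToWorldPoint assumes a 2D scene at world z = 0):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: fetch the marker uv without stalling the GPU, then convert the
// viewport uv into a world position using the camera that rendered it.
public class MarkerReadback : MonoBehaviour
{
    public ComputeBuffer markerBuffer; // the buffer the shader writes into
    public Camera cam;

    void Update()
    {
        AsyncGPUReadback.Request(markerBuffer, request =>
        {
            if (request.hasError) return;
            Vector2 uv = request.GetData<Vector2>()[0];
            // viewport uv -> world space (z = distance in front of camera)
            Vector3 world = cam.ViewportToWorldPoint(
                new Vector3(uv.x, uv.y, -cam.transform.position.z));
            transform.position = world;
        });
    }
}
```

note the callback arrives a frame or two later, so the position will lag slightly behind the rendered frame.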

Also, yes, I tested the shader part by replacing that pixel’s color with some other color and that worked fine. The special pixel is detected correctly.

Now, to figure out everything else. :joy: