Merging Two Images

I’ve been working on a game for a couple of years now. I have a problem that has plagued me since the beginning, and I’m probably going about it in the wrong way.

A little background:

Originally I started the project in XNA and never could find a solution. I then moved to Unity3D and hit the same issue when using textures and planes. Now I’m using the 2D sprite system and it’s still the same underlying issue.

I am NOT a game programmer; I just do this on the side in my free time, so heavy jargon may confuse me lol

Objective:

I have two images per level that act as the background. Image A is shown initially. The user may then “draw” a shape on the image using right angles. Once the user is done, the area that was drawn will show the overlay of Image B. What the user draws can be any size and have numerous points.

Problem:

When the user is drawing, I create a list of Vector2s that correspond to each turn they took. From this list of points I can figure out which pixels changed and which did not.

What I am doing to the background is literally replacing the pixels of Image A with those of Image B, then setting the pixels and calling Apply. The code that calculates which pixels need replacing runs in about 0.01 seconds. I can also make SetPixels quicker by giving it a smaller area instead of the whole image (as long as the user didn’t draw from corner to corner). However, no matter what I do, I can’t speed up the Apply.
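A minimal sketch of the smaller-area SetPixels mentioned above, assuming the bounding rectangle of the drawn shape has already been computed (the field names here are illustrative, not the project’s actual code):

```csharp
using UnityEngine;

public class PartialUpdate : MonoBehaviour
{
    public Texture2D texFinal;   // the background texture being modified
    public Texture2D texB;       // the overlay image (Image B)

    // xMin/yMin/width/height would come from the min/max of the drawn points.
    public void ReplaceRegion(int xMin, int yMin, int width, int height)
    {
        // Read just the overlay pixels for the affected rectangle...
        Color[] patch = texB.GetPixels(xMin, yMin, width, height);
        // ...write them into the same rectangle of the background...
        texFinal.SetPixels(xMin, yMin, width, height, patch);
        // ...and upload. Apply still re-uploads the whole texture, which
        // is why this alone does not remove the Apply cost.
        texFinal.Apply();
    }
}
```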

Possible Solutions:

Shader: A friend of mine suggested a “Shader” that was able to take in a list of points and somehow change the image to the new image. I have extremely limited knowledge of shaders and didn’t think this was what they were used for, or could even take parameters.

New Images: I didn’t like this solution but I may attempt it: decompose the drawn area into rectangles, create dozens of new small images, position them properly, and set their Z/sort order above the background. However, this calls a LOT of Applys; it may still be quicker because the images are smaller?

Threaded: My original solution was to thread the background changing. At the time (this may have changed), the Apply function was not allowed off the main thread in Unity. So I did all my math on the worker thread and then jumped back to the main thread to do SetPixels and Apply. This had little to no effect on my game lag, since the math was the smallest delay compared to SetPixels and Apply.
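The thread-split described above can be sketched roughly like this (names are illustrative; Unity’s texture API must be called from the main thread, so only the array math runs on the worker):

```csharp
using System.Threading;
using UnityEngine;

public class ThreadedFill : MonoBehaviour
{
    public Texture2D texFinal;
    Color[] m_clrA, m_clrB;      // pixel buffers for Image A and Image B
    volatile bool m_resultReady; // set by the worker, read by Update()

    public void StartFill()
    {
        new Thread(() =>
        {
            // Pure array math is safe off the main thread.
            for (int i = 0; i < m_clrA.Length; i++)
                m_clrA[i] = m_clrB[i]; // real code tests each pixel against the shape
            m_resultReady = true;
        }).Start();
    }

    void Update()
    {
        if (m_resultReady)
        {
            m_resultReady = false;
            // These two calls must run on the main thread, and they are
            // still the dominant cost, exactly as the post observes.
            texFinal.SetPixels(m_clrA);
            texFinal.Apply();
        }
    }
}
```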

Stages: I then tried to do the updates in “stages” with a time budget. If any stage took longer than allowed, the next step was deferred to the next Update call. Example: max time = 0.05. My math takes 0.03 seconds, so step 2 (setting the pixels in groups) is allowed to run. If the elapsed time ever goes past 0.05, it exits and resumes where it left off on the next Update. After setting the pixels in groups, it does the Apply. Once again, the Apply lags the game for a brief but quite noticeable moment. This is the best approach I have come up with so far.
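The staged, time-budgeted approach above maps naturally onto a Unity coroutine; a minimal sketch, with the budget and row-chunking scheme purely illustrative:

```csharp
using System.Collections;
using UnityEngine;

public class StagedFill : MonoBehaviour
{
    public Texture2D texFinal;
    Color[] m_clrA, m_clrB;
    const float maxTime = 0.05f; // per-frame budget in seconds

    IEnumerator FillInStages(int rows, int width)
    {
        float start = Time.realtimeSinceStartup;
        for (int y = 0; y < rows; y++)
        {
            for (int x = 0; x < width; x++)
                m_clrA[x + y * width] = m_clrB[x + y * width];

            // Over budget? Resume from the next row on the next frame.
            if (Time.realtimeSinceStartup - start > maxTime)
            {
                yield return null;
                start = Time.realtimeSinceStartup;
            }
        }
        // Final stages: SetPixels, then the (still unavoidable) Apply.
        texFinal.SetPixels(m_clrA);
        texFinal.Apply();
    }
}
```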

Examples:

Project: I have made an example project which is not optimized and doesn’t show what my math is doing at all. However it does show the Apply delay, and the overall goal. See Attached Zip.

See Attached Image: This image is what someone could draw. The possibilities are endless, so I can’t precompute every possible shape. Image one is orange with circles and image two is solid green.

YouTube QIX: A game that does something very similar is QIX, from the late ’80s/early ’90s.


The example video doesn’t do anything crazy, mostly just boxes, but you could make “stairs” or crazy shapes like in the example image.

Code: This is an excerpt of the code from the zip. Again, just a very quick example of what’s going on.

for (int intY = 120; intY < 1000; intY++)
{
    for (int intX = 0; intX < 1000; intX++)
    {
        // 4096 is the texture width used to index the flat pixel arrays.
        // (The original inner if-check duplicated the loop bounds and
        // also skipped row 120; the loop conditions already bound the region.)
        m_clrA[intX + (intY * 4096)] = m_clrB[intX + (intY * 4096)];
    }
}

texFinal.SetPixels(m_clrA);

texFinal.Apply();

m_SpriteRenderer.sprite = Sprite.Create(texFinal, new Rect(0, 0, 4096, 4096), new Vector2(0.5f, 0.5f));
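Two cheap tweaks might shave time off the snippet above before resorting to anything fancier (untested against this project; `texB` is an assumed name for the overlay texture): Color32 buffers avoid a per-pixel float-to-byte conversion during upload, and Apply(false) skips regenerating mipmaps.

```csharp
// Color32 variants of the same copy; GetPixels32/SetPixels32 work on
// byte-per-channel data, which is cheaper for Unity to convert and upload.
Color32[] m_clrA32 = texFinal.GetPixels32();
Color32[] m_clrB32 = texB.GetPixels32(); // texB = the overlay texture (assumed name)

for (int i = 0; i < m_clrA32.Length; i++)
    m_clrA32[i] = m_clrB32[i]; // real code replaces only pixels inside the drawn shape

texFinal.SetPixels32(m_clrA32);
texFinal.Apply(false); // updateMipmaps: false — skips mipmap regeneration
```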

Any help would be greatly appreciated. There must be some way to get this to work, since someone managed it 20+ years ago :)

A native app for most platforms could do this without trouble. The issue is with Unity’s support for manipulating images on the pixel level and pushing the result to the graphics card. I’m sure there are solid technical reasons for why this is so slow. I have a couple of untested ideas for you.

Idea #1: you can take your array of Vector2s and build a mesh from it. You can then set the UV coordinates to display only the part of the overlay image that is inside the array of coordinates. So you really have two images: one on a Quad and another on a custom mesh just in front of the Quad. This should be very fast. For constructing the mesh, see the wiki script Triangulator.
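A sketch of Idea #1, assuming the Triangulator wiki script mentioned above is in the project (its constructor takes the polygon points and Triangulate() returns the triangle indices); `worldSize` is an assumed scale mapping drawn points to 0–1 UVs:

```csharp
using UnityEngine;

public static class OverlayMesh
{
    public static Mesh Build(Vector2[] points, float worldSize)
    {
        var triangulator = new Triangulator(points); // Unify wiki script
        int[] indices = triangulator.Triangulate();

        var vertices = new Vector3[points.Length];
        var uv = new Vector2[points.Length];
        for (int i = 0; i < points.Length; i++)
        {
            vertices[i] = new Vector3(points[i].x, points[i].y, 0f);
            // UV = position relative to the background, so the overlay
            // texture lines up exactly with the image underneath.
            uv[i] = points[i] / worldSize;
        }

        var mesh = new Mesh { vertices = vertices, uv = uv, triangles = indices };
        mesh.RecalculateNormals();
        return mesh;
    }
}
```

Assign the returned mesh to a MeshFilter whose material uses Image B as its texture, placed just in front of the background Quad.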

Idea #2: there are a number of shaders that take masks, or with a bit of work you can write one yourself. That is, your shader gets three images: the original image, the one to overlay, and a mask showing what should be masked in. Your code constructs the mask. You potentially get performance benefits because you can construct an Alpha8 texture. In addition, you may be able to get by with a mask of lower resolution than your original two images. This means that the Apply() is moving far fewer pixels to the graphics card. This is untested conjecture on my part, but given a shader that supports a mask, it should be quick to test.
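The C# side of Idea #2 might look like the sketch below. The material, the "_MaskTex" property name, and the mask size are all assumptions; the shader itself (not shown) would simply lerp between Image A and Image B by the mask’s alpha:

```csharp
using UnityEngine;

public class MaskedOverlay : MonoBehaviour
{
    public Material maskBlendMaterial; // material using a hypothetical mask-blend shader
    Texture2D m_mask;
    const int MaskSize = 512; // far smaller than the 4096x4096 images

    void Start()
    {
        // Alpha8: one byte per pixel, so Apply() uploads very little data.
        m_mask = new Texture2D(MaskSize, MaskSize, TextureFormat.Alpha8, false);
        maskBlendMaterial.SetTexture("_MaskTex", m_mask); // assumed property name
    }

    // Called when the player closes a shape; pixels inside the shape get
    // alpha 1 so the shader shows Image B there.
    public void RevealRegion(int xMin, int yMin, int width, int height)
    {
        var block = new Color[width * height];
        for (int i = 0; i < block.Length; i++)
            block[i] = new Color(0f, 0f, 0f, 1f); // only alpha is stored in Alpha8
        m_mask.SetPixels(xMin, yMin, width, height, block);
        m_mask.Apply(false); // uploads a 512x512 single-channel texture only
    }
}
```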

For method #2, maybe a formal Mask would work (I’ve never used them), but the mask/alpha/blend third texture can just be a regular texture. The main trick is that the combined textures do not need to be the same size. The “blend” texture can be as small as you need. Same idea as painting terrain textures.

This is what I would do (using a shader):

When the user creates a set of points, I would add those points to a mesh as individual triangles (you could also use immediate mode, but a mesh will be faster). The texture coordinate of each vertex would be set to the relative coordinate on the background. This mesh can then be rendered with the foreground image as a texture in its material and give the results you want.

The most expensive part of this should be creating and uploading the mesh, which is far less expensive than uploading a texture. (I just realized @robertbu already suggested this… oh well)