Yep, Texture2D.SetPixels is too slow. So now what?

I’m trying to achieve a dynamic white shoreline/foam/splashing effect around objects in water, similar to the white foam seen in Monument Valley (relevant footage at 0:14).

I successfully got the look I was going for with Texture2D.SetPixels(), but, not surprisingly, I’m getting terrible frame rates by calling Texture2D.SetPixels() every frame.

My question is how can I achieve a similar effect without taking such a big performance hit?

(animated GIF hosted on Imgur)

My water is a simple plane object (GameObject > 3D Object > Plane) with a flat blue texture on it. I’m modifying its vertices on the y-axis every frame to create a wave effect, and I have a simple cube (which can move around) standing in the water.

The white foam around the cube is a simple particle system (custom written) that’s updated every frame in Update() and drawn directly onto the blue Texture2D of the plane with Texture2D.SetPixels(), then sent to the GPU with Texture2D.Apply(). This creates exactly the visual look I’m going for, but the performance is atrocious.
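For context, the approach described above looks roughly like this (a minimal sketch, not the poster's actual code; the particle list and foam color stand in for the custom particle system):

```csharp
// Sketch of the CPU-bound approach: stamp particles into a Texture2D
// every frame, then upload the whole texture with Apply().
using UnityEngine;
using System.Collections.Generic;

public class FoamPainterCPU : MonoBehaviour
{
    public Texture2D waterTex;                          // the plane's 512x512 blue texture
    public Color32 foamColor = Color.white;
    private List<Vector2> particles = new List<Vector2>(); // particle positions in UV space

    void Update()
    {
        Color32[] pixels = waterTex.GetPixels32();      // CPU-side copy of the texture
        foreach (Vector2 p in particles)
        {
            int x = Mathf.Clamp((int)(p.x * waterTex.width), 0, waterTex.width - 1);
            int y = Mathf.Clamp((int)(p.y * waterTex.height), 0, waterTex.height - 1);
            pixels[y * waterTex.width + x] = foamColor;
        }
        waterTex.SetPixels32(pixels);   // write back on the CPU...
        waterTex.Apply();               // ...then upload the full texture to the GPU (the slow part)
    }
}
```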

Help me, O’ Graphics Gods of Unity. Does anyone have any alternative ideas on how I could get a similar effect but maintain good performance? Obviously a normal Unity particle system would provide great performance, but Unity’s particle systems can’t “adhere” to the surface of a mesh (and they don’t have the correct perspective, nor do they wobble with the waves as seen in the image above). I’ve seen mention of “dynamic decals” and shaders and RenderTextures and shadow projectors for doing this type of thing, but those solutions are typically only mentioned as a single sentence solution without details or examples, so I’m having trouble searching around to find out how to use them to achieve this effect.

I have a little bit of experience with shaders, but I’m not sure how I can leverage them to get this effect. I understand that what I’m doing is expensive because I’m sending a full texture (512x512 RGB, about 786KB in memory?) to the GPU every frame. Is there some better way to draw dynamic particles directly on the GPU? All I can think is that, since I have a very flat/simple look, I really only need a single bit per pixel (512x512 = 262144 bits = 32KB) to tell the GPU whether the pixel should be the default blue or white. But I wouldn’t even know how to get started writing a pixel shader that could take in an array of bits and create the effect I’m going for. (And maybe that’s still too much data to send to the GPU every frame, anyway.)
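The bandwidth arithmetic above checks out; as a quick sanity check:

```python
# Memory figures from the post: 24-bit RGB texture vs. a 1-bit-per-pixel mask.
rgb_bytes = 512 * 512 * 3      # 3 bytes per pixel (RGB)
bit_bytes = 512 * 512 // 8     # 1 bit per pixel, packed into bytes

print(rgb_bytes)   # 786432 bytes, i.e. ~786 KB (decimal)
print(bit_bytes)   # 32768 bytes = 32 KB
```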

Is using a shader even on the right track? Should I instead be looking into decals/RenderTextures/projectors? If so, how would I use them to create this effect?

Not a solution but maybe something to think about.
In most AAA games I’ve played in the past, they fake it with a (sometimes deforming) mesh directly above the water mesh. Completely believable and invisible to most gamers who aren’t game devs. This mesh has some kind of low-resolution repeating texture that animates (“foams”) as the water line moves up the shoreline.
I remember several distinctly because I inspected the water in detail while playing them: Bully, Red Dead Redemption and Oblivion. Although two are from the same studio, I think this is the de facto go-to way of getting shoreline graphics to work, at least it used to be.
Having played Skyrim recently, and reading your post, I’m interested in how its visual solution is set up. I’d expect this workflow to still be used, though shaders have probably become efficient enough that animated fake edge meshes might not be how it’s done anymore.
Since Skyrim has such a vast amount of water I will check it out and update what I find.

RenderTextures, yes. I would use a compute shader to draw in the sections: for example, dispatch a compute shader over a certain bounding area and modify the pixel colors there.
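A minimal sketch of that idea (assuming a RenderTexture created with enableRandomWrite = true; the kernel and property names here are made up for illustration):

```hlsl
// FoamSplat.compute -- hypothetical kernel that whitens pixels
// inside a bounding circle on the render texture.
#pragma kernel FoamSplat

RWTexture2D<float4> Result;   // the RenderTexture (enableRandomWrite = true)
float2 Center;                // splat center, in pixels
float Radius;                 // splat radius, in pixels

[numthreads(8,8,1)]
void FoamSplat (uint3 id : SV_DispatchThreadID)
{
    if (distance(float2(id.xy), Center) < Radius)
        Result[id.xy] = float4(1, 1, 1, 1);   // foam = white
}
```

On the C# side you would bind the texture with ComputeShader.SetTexture, set Center/Radius, and call shader.Dispatch(kernel, 512/8, 512/8, 1) for a 512x512 target.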

Scrawk has a lot of information on that:
http://scrawkblog.com/

Check out his DirectCompute sections.

Though of course, you could also always Blit, which would probably be faster. Same concept, basically: Graphics.Blit(source, dest, material) draws across the destination texture, using the source as _MainTex in the shader (source and destination should be different textures). Using that would allow you to draw easily. That’s the quickest. The basic process you want is to update the texture on the graphics card, not on the CPU.
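A sketch of that setup (hedged: the material/shader is hypothetical, and two RenderTextures are ping-ponged because reading and writing the same target in a single Blit is undefined):

```csharp
// GPU-side drawing via Graphics.Blit. "foamMat" uses a hypothetical
// shader that reads _MainTex and adds foam on top of it.
using UnityEngine;

public class FoamPainterGPU : MonoBehaviour
{
    public Material foamMat;            // hypothetical foam-stamping shader
    private RenderTexture rtA, rtB;

    void Start()
    {
        rtA = new RenderTexture(512, 512, 0);
        rtB = new RenderTexture(512, 512, 0);
    }

    void Update()
    {
        // Per-frame data (e.g. splat positions) can go in via foamMat.SetVector(...).
        Graphics.Blit(rtA, rtB, foamMat);      // rtA is _MainTex, output lands in rtB
        var tmp = rtA; rtA = rtB; rtB = tmp;   // swap for next frame
        // Assign rtA (the latest result) to the water material's texture slot.
    }
}
```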


For reference, if you don’t know: SetPixels32 with an uncompressed 32-bit format like ARGB32 is significantly faster than SetPixels, and you generally want the texture to be a power of two.
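In code, the difference is just using Color32 buffers (a minimal sketch):

```csharp
// SetPixels32 writes Color32 structs (4 raw bytes each), skipping the
// per-pixel float-to-byte conversion that SetPixels (with Color) performs.
using UnityEngine;

Texture2D tex = new Texture2D(512, 512, TextureFormat.ARGB32, false);
Color32[] buf = new Color32[512 * 512];
// ... fill buf with your pixel data ...
tex.SetPixels32(buf);
tex.Apply(false);   // the full-texture upload is still the remaining cost
```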

However, it still likely would be too slow for this effect if it needs to be repeated for multiple objects or the resolution is very high.

I am curious about blitting to render textures. Would blitting 512x512 pixels onto a render texture be faster than SetPixels32 with a 512x512 texture? I guess I’ll give that a try.

The easiest way would simply be to have a vertex-painted ripple mesh; the bit around the pillar is vertex-painted white. This would be immeasurable in performance cost, just a mere blip, and would look identical to the image above.


What hippocoder said, but in Monument Valley the foam outline reacts to the structure as it emerges from the water, so you can’t paint the color directly. However, if you look closely, the structure is roughly pyramid-shaped, so the color of a vertex can switch to white as soon as it passes below a certain height threshold. You can calculate this height threshold and store it in the mesh color.

What you need to do is a whole bunch of raycasts from very high above to essentially calculate a per-vertex heightfield of your structure, and bake those values into your water mesh vertex color. You can do this in the editor, or on level load if you have dynamic levels. At runtime, the vertex shader calculates a “foam value” for the vertex (i.e. 1 or 0) with a simple threshold via the step() function.
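The bake step might look something like this (a hedged sketch; the normalization range and component names are assumptions, not from the post):

```csharp
// Editor- or load-time bake: raycast straight down above each water vertex
// and store the structure's hit height in the vertex color's R channel.
using UnityEngine;

public class FoamBaker : MonoBehaviour
{
    public MeshFilter water;
    public float maxHeight = 10f;   // assumed normalization range for heights

    public void Bake()
    {
        Mesh mesh = water.mesh;
        Vector3[] verts = mesh.vertices;
        Color[] colors = new Color[verts.Length];
        for (int i = 0; i < verts.Length; i++)
        {
            Vector3 world = water.transform.TransformPoint(verts[i]);
            Vector3 origin = world + Vector3.up * 100f;   // "very high above"
            float h = 0f;
            RaycastHit hit;
            if (Physics.Raycast(origin, Vector3.down, out hit, 200f))
                h = hit.point.y / maxHeight;              // normalized structure height
            colors[i] = new Color(h, 0f, 0f, 1f);         // bake into vertex color R
        }
        mesh.colors = colors;
    }
}
```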

The “foam value” will be interpolated across all the triangles for each pixel so you may need another step in the fragment (or surface) shader to snap to either 1 or 0.
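The shader side of the threshold could be sketched like this (the body of a CGPROGRAM block; _WaterLevel and the colors are illustrative names, and the baked height is assumed to be in vertex color R per the bake step):

```hlsl
// Hypothetical vertex/fragment pair: foam where the baked structure
// height (vertex color R) is above the current water level.
float _WaterLevel;   // set from script each frame

struct appdata { float4 vertex : POSITION; float4 color : COLOR; };
struct v2f     { float4 pos : SV_POSITION; float foam : TEXCOORD0; };

v2f vert (appdata v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    o.foam = step(_WaterLevel, v.color.r);   // 1 above the water line, else 0
    return o;
}

fixed4 frag (v2f i) : SV_Target
{
    // Interpolation blurs the edge across the triangle; snap back to 0/1.
    float foam = step(0.5, i.foam);
    return lerp(fixed4(0.2, 0.5, 0.9, 1), fixed4(1, 1, 1, 1), foam); // blue -> white
}
```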

I am almost entirely certain of this. SetPixels32 still causes an upload from a CPU buffer to a texture buffer. It’s quicker, but not quick. I believe it still iterates through the pixels; it just copies the 4 bytes directly rather than converting each color to 4 bytes. Blitting a 512x512 texture is going to be almost unnoticeable on any hardware made in the last 3-4 years, unless it’s an extremely complex shader.

I’m not actually testing this, mind, but if my guess is correct, cases where SetPixels32 would be faster are few and far between.

If going the texture route, you can use point sprites to quickly fill a render texture, or better yet, Unity sprites, and use that. Either should be a thousand percent faster than SetPixels.
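One way to read that suggestion (hedged; the camera/layer setup and the _FoamMask property are assumptions): render ordinary Unity sprites with a dedicated orthographic camera whose targetTexture is a RenderTexture, and have the water shader sample that texture as its foam mask.

```csharp
// Sketch: foam drawn as normal sprites on a "Foam" layer, captured by a
// second top-down camera into a RenderTexture the water shader samples.
using UnityEngine;

public class FoamCaptureSetup : MonoBehaviour
{
    public Camera foamCam;          // orthographic, looking straight down at the water
    public Material waterMat;       // water shader with a foam-mask texture slot

    void Start()
    {
        var rt = new RenderTexture(512, 512, 0);
        foamCam.targetTexture = rt;
        foamCam.cullingMask = LayerMask.GetMask("Foam");  // assumed layer name
        foamCam.clearFlags = CameraClearFlags.SolidColor;
        foamCam.backgroundColor = Color.black;            // black = no foam
        waterMat.SetTexture("_FoamMask", rt);             // hypothetical shader property
    }
}
```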
