# How do they do the painting in Splatoon?

I was watching my younger son play Splatoon yesterday, and it’s got me thinking. First, about what an amazingly good game design it is (but that’d be a topic for the Game Design forum). And second…

How do they do that?!

In case you’ve been living in a cave, Splatoon is a Nintendo third-person shooter where, instead of ordinary weapons, all your weapons distribute paint in various ways. One team is one color, the other team is a different color.

This paint (or “ink” in Splatoon parlance) splatters very satisfyingly, and also wraps neatly around corners and edges.

So I woke up wondering to myself how I would tackle something like that. Things I know how to do:

• Cast a ray onto a mesh and get the UV coordinates (as well as the 3D coordinates) of the point hit.
• Copy a source texture into a target Texture2D (though this is a bit expensive).

But those edges and corners are the tricky part. If I had to do it this moment, I guess I would cast a pair of rays at each of 4 points around the center of the splat, and from the point hit, calculate where (in UV space) the center and scale of the splat would be. For example, I cast a ray 1 unit above and right of the aim point, and another ray 0.9 units above and right, and from the two UV points hit, extrapolate 10 times further to get where the center would be. Then I copy in my splat texture, and repeat for the other three corners, only skipping repainting if they work out to the same center.
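A rough sketch of that extrapolation step, in plain Python rather than engine code (the function name and the two-tuple UVs are just illustrative; a real version would get the UVs from the engine's raycast hit):

```python
def extrapolate_splat_center(uv_outer, uv_inner, factor=10.0):
    """Extrapolate from the outer hit (ray cast 1.0 units off the aim
    point) through the inner hit (0.9 units off) to estimate where, in
    UV space, the splat center lands: the two hits are one 0.1-unit
    step apart, and the center is ten such steps from the outer hit."""
    du = uv_inner[0] - uv_outer[0]
    dv = uv_inner[1] - uv_outer[1]
    return (uv_outer[0] + du * factor, uv_outer[1] + dv * factor)
```

The same two-point extrapolation also gives you a local UV-space scale for the splat, since the distance between the two hits tells you how big 0.1 world units is in texels.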

But I foresee problems. If my two rays happen to straddle an edge/seam which goes to different parts of UV space, I could end up drawing a giant splat that spans things it shouldn’t. And I suspect there are cases where edges wouldn’t line up as neatly as I want this way.

Another approach would be to, for each hit, draw a large number of very small circular blobs of paint. And simply iterate over the whole splat area, casting a ray for each small blob. But that’s a lot of ray-casting, and a lot of overdrawing in the texture, both of which seem like a Bad Thing for performance in a game where spraying paint everywhere is kind of the point.

Any other ideas? Is there some clever solution to this I haven’t thought of?

Check this out… there's not much info there, but you could possibly contact them…

Edit: I just checked the profile; last seen a year ago… but there is a little info about the approach in the text…


Keeping a low-res texture in memory, mapped like light-bake UVs, and writing to it wouldn't be that bad.

I’d probably go for setting vertex color with tri-planar projection just to avoid dealing with the seams.

In either case the splatter-ey-ness is just basic texture blend stuff.

I don’t think the texture can be too low-res… the splatters have very rounded edges. You wouldn’t want them to look pixely, even up close (since usually, “up close” is where they appear).

Can you elaborate on that? I don’t quite understand this technique.

A bit of googling did turn up this thread, but it’s light on detail. I gather that it’s somehow using the vertex colors to index into a texture map, but I don’t understand the “triplanar projection” part, nor why this is better than just using another UV channel.

But I have a feeling this may be the clever approach I’ve been missing!

OK, I found this explanation of triplanar mapping, which simply samples three textures based on the world coordinates of each point.

That’s a cool trick and one I’m glad to add to my toolbox. But I still don’t see quite how we’d use this to paint the environment in a Splatoon-like fashion. (Though I do now see the advantage of using colors rather than an extra UV channel: UV channels store only two scalars, while a color channel stores three or four.)

Boldly confessing my ignorance since 2011,

• Joe

Yes, you use world coordinates and blend between them based on the normals. Saves you from doing the UVs and solves the seam issue, but results in more samples in your shader.
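As a sketch of that idea, here is the triplanar weighting in plain Python (in practice this lives in a shader; `sample` is a stand-in for a texture fetch):

```python
def triplanar_weights(normal):
    """Blend weights for the three world-axis projections, proportional
    to how strongly the surface normal faces each axis."""
    ax, ay, az = abs(normal[0]), abs(normal[1]), abs(normal[2])
    total = ax + ay + az
    return (ax / total, ay / total, az / total)

def triplanar_sample(sample, pos, normal):
    """Blend one texture sampled by three world-space projections.
    sample(u, v) stands in for a texture fetch at those coordinates."""
    wx, wy, wz = triplanar_weights(normal)
    return (wx * sample(pos[1], pos[2])    # X-facing: project onto YZ
          + wy * sample(pos[0], pos[2])    # Y-facing: project onto XZ
          + wz * sample(pos[0], pos[1]))   # Z-facing: project onto XY
```

Real shaders usually sharpen the weights (e.g. raise them to a power before normalizing) so faces snap more cleanly to their dominant axis.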

Instead of adding the “paint value” to a texture, I’m suggesting you add it to the vertex color on the level mesh itself and use that as the mask between Team A’s paint, Team B’s paint, and the default appearance of the level.

This is basically the same effect as Portal 2’s gel. You render color into a low-resolution render texture that uses the scene’s lightmap UVs, then perturb that texture lookup with noise to hide the low resolution of the underlying data. Advanced terrain shaders and vertex-color-based texture blending do the same thing to hide the low-resolution control texture or vertex density.
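A toy illustration of the perturbed lookup, assuming the paint mask is just a low-res 2D grid (a real shader would jitter the UV with a tiling noise texture rather than `random`):

```python
import random

def noisy_mask_lookup(mask, u, v, noise_amp=0.5):
    """Sample a low-res paint mask (a 2D list of 0..1 values) with the
    lookup position jittered by noise, hiding the blocky texel edges.
    noise_amp is the jitter radius in texels."""
    w, h = len(mask[0]), len(mask)
    ju = u * w + (random.random() - 0.5) * noise_amp
    jv = v * h + (random.random() - 0.5) * noise_amp
    # Clamp the jittered position back into the mask.
    x = min(max(int(ju), 0), w - 1)
    y = min(max(int(jv), 0), h - 1)
    return mask[y][x]
```

Because every nearby fragment gets a slightly different jitter, the straight texel boundaries of the mask dissolve into an irregular, organic-looking edge.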

Edit: there was a great post years ago where someone broke down exactly how the paint/gel was done in Portal 2, but I can’t find it anymore. The page might not even exist anymore.

However this NeoGAF thread has a great example: http://m.neogaf.com/showpost.php?p=116699387&postcount=235

But doesn’t it also mean that you can’t have 2 parallel surfaces that “overlap” when viewed from any of the three projection directions? In other words, if I had two towers right next to each other, both part of the same mesh, then this triplanar projection trick wouldn’t work, because it would come up with the same texture coordinates for both towers (except for the angles where they are side by side).

In fact, even a single box would (with the triplanar projection as described above) end up with the same texture on opposite faces of the box, which is no good — painting one side shouldn’t paint the other. To avoid that you’d have to go to 6-plane projection, no?

Man, I feel dense today, because I just don’t see how that would work unless your level models are very polygon-dense. Picture a big box or wall; I’d expect it to be composed of 2 triangles. I shoot a little splat at some random place in the middle… what would I set for the vertex colors at the corners, to get a nice splat in the middle?

Thanks for that, that makes sense and is a great way to hide the low-res-ness of the texture you’re using to keep track of the paint.

It doesn’t really help with the problem of how to figure out WHAT texels to set… I’ve been thinking about this on and off all day, and still haven’t come up with anything better than the multiple ray-casting idea at the top of this thread.

OK, this is the second time I’ve heard something like this, and the second time I’ve failed to understand it. Perhaps it’s because I don’t have much experience with lightmaps. Can you break this down for me, or point me to a reference that explains it? What does lightmapping have to do with it?

Thanks,

• Joe

At its most basic, lightmaps are just textures: textures that are uniquely UVed for the entire scene, at least for the static geometry.

Do a raycast and use that position to draw into your custom texture. You’ll have to do multiple raycasts to deal with (literal) edge cases, as just drawing a larger radius might bleed into areas visually unconnected to where you hit. The alternative is that you could use a 3D texture instead, but that might cause problems with occlusion, i.e. paint a wall and the back of the wall also gets painted.
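The naive draw-into-texture step might look like this sketch (plain Python over a 2D list standing in for a Texture2D; it deliberately ignores the seam problems discussed above):

```python
def stamp_splat(texture, hit_u, hit_v, radius_px):
    """Draw a filled circle of paint into `texture` (a 2D list of 0/1
    values) centered at the raycast hit's UV coordinates."""
    h, w = len(texture), len(texture[0])
    cx, cy = hit_u * w, hit_v * h
    for y in range(h):
        for x in range(w):
            # Test each texel center against the splat circle.
            if (x + 0.5 - cx) ** 2 + (y + 0.5 - cy) ** 2 <= radius_px ** 2:
                texture[y][x] = 1
```

A real version would only iterate the circle's bounding box, blend a splat sprite instead of writing hard 1s, and then upload the dirty region to the GPU.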


So, raycasting and drawing into a texture is easy enough. (I was able to quickly adapt the code we use for painting zones in High Frontier.)

But it’s the darn edge cases that are troublesome. You can see a seam in the cylinder at right, where it was cut for UV mapping; and there are similar discontinuities in the block at left (somewhat harder to see because I moved the mouse more and continued painting onto the next surface). Same problem where the block and cylinder meet with the ground.

I’m no longer certain that ray-casts are really the way to go. In Splatoon, most (maybe all?) weapons shoot blobs of paint that arc out of your gun, and splatter sometime later. They’re certainly not ray-casting from the camera or player. They might be ray-casting from their own position as they move forward each frame, but it’s just as likely they’re doing a sphere sweep.

So, maybe a better way to think of this is as a sphere collision test. You could even drill down to individual triangles, painting appropriately-sized blobs of color on each triangle according to how they intercept the sphere. Unity doesn’t provide such detailed collision info, but sphere/triangle intersection is pretty easy.
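For reference, a sphere/triangle test really is short: find the closest point on the triangle to the sphere’s center and compare that distance to the radius. A Python sketch of the standard closest-point construction (after Ericson’s *Real-Time Collision Detection*, 5.1.5):

```python
def sub(u, v): return (u[0] - v[0], u[1] - v[1], u[2] - v[2])
def dot(u, v): return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def closest_point_on_triangle(p, a, b, c):
    """Closest point to p on triangle abc, by classifying p against the
    triangle's vertex, edge, and face Voronoi regions."""
    ab, ac, ap = sub(b, a), sub(c, a), sub(p, a)
    d1, d2 = dot(ab, ap), dot(ac, ap)
    if d1 <= 0 and d2 <= 0:
        return a                                   # vertex region A
    bp = sub(p, b)
    d3, d4 = dot(ab, bp), dot(ac, bp)
    if d3 >= 0 and d4 <= d3:
        return b                                   # vertex region B
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:            # edge region AB
        t = d1 / (d1 - d3)
        return (a[0] + t * ab[0], a[1] + t * ab[1], a[2] + t * ab[2])
    cp = sub(p, c)
    d5, d6 = dot(ab, cp), dot(ac, cp)
    if d6 >= 0 and d5 <= d6:
        return c                                   # vertex region C
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:            # edge region AC
        t = d2 / (d2 - d6)
        return (a[0] + t * ac[0], a[1] + t * ac[1], a[2] + t * ac[2])
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:  # edge region BC
        t = (d4 - d3) / ((d4 - d3) + (d5 - d6))
        bc = sub(c, b)
        return (b[0] + t * bc[0], b[1] + t * bc[1], b[2] + t * bc[2])
    denom = 1.0 / (va + vb + vc)                   # interior of the face
    v, w = vb * denom, vc * denom
    return (a[0] + ab[0] * v + ac[0] * w,
            a[1] + ab[1] * v + ac[1] * w,
            a[2] + ab[2] * v + ac[2] * w)

def sphere_hits_triangle(center, radius, a, b, c):
    q = closest_point_on_triangle(center, a, b, c)
    d = sub(center, q)
    return dot(d, d) <= radius * radius
```

The closest point also tells you roughly where on the triangle the blob should be centered, which is exactly what you’d want for the per-triangle painting idea.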

On the other hand, maybe I’m still approaching this the hard way.

Hmm this thread got me thinking.
I never thought about how this could be done, but if you guys are open for another idea…

What about re-meshing the entire scene, just like the nav-mesh does with its voxelizer?
You’d have essentially twice the vertices for the level, which would be pretty bad. A decimate filter like in blender could help… maybe, but it would still be pretty bad.

As for raycasting, maybe they have some system that’s heavily optimized for the game where they can indeed correctly identify all vertices in a given sphere and how to paint them. Then render every object in the level with a “paintable shader” that just blends between normal and the ink.

I won’t be able to use fancy terminology here, but I’ll give an idea of how I think it is handled.

What I believe they do is a liquid system (a.k.a. a particle system) that is coded to stick where it lands. After it does the initial splat animation, it then becomes an object that can be interacted with (like swimming through it).

I doubt that’s what they do.
That would be a decal system. It would work (with a shader that maps in world coordinates), but imagine how many objects, and therefore draw calls, that would cause.

Also it would have the same problems as what we already have. (no support for edges or when a wall meets a floor…)

No, there is no texture. You’re writing into the mesh.

So what? Verts are cheap. Your paintable mesh doesn’t have to be your collision mesh.

OK, so I think I get what you’re saying, @brownboot67 (and it may be what you meant too, @dadude123, though it wouldn’t be twice as many vertices; it’d be hundreds of times as many).

Build your level out of high-poly models, even for flat surfaces: something like no vertex more than 0.1 m from its neighbors (whatever spacing you pick here determines the smallest drop of paint you can apply). Then to apply paint, you set the vertex color, probably just filling in the red channel for one team, the blue channel for the other team, etc.

Then you’d use a shader that takes the interpolated vertex color and thresholds it, probably with the addition of some noise as described here.
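The threshold-plus-noise step could be as simple as this fragment-style sketch (the names and the 0.3 noise amplitude are arbitrary choices, not anything from the thread):

```python
def paint_coverage(mask_value, noise_value, threshold=0.5, noise_amp=0.3):
    """Fragment-style threshold: the interpolated vertex-color mask is
    offset by a noise sample (0..1) before thresholding, which turns
    the soft linear ramp between painted and unpainted vertices into
    an irregular splat edge. Returns 1.0 for paint, 0.0 for surface."""
    return 1.0 if mask_value + (noise_value - 0.5) * noise_amp > threshold else 0.0
```

In a shader you would read `noise_value` from a tiling noise texture, so the edge wobble is stable frame to frame.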

It’s a clever approach, though I’m not completely convinced verts are that cheap. But it would be very quick to apply paint, since you don’t have to write to (and upload) a texture — all you have to do is set vertex colors. Oh, and finding the vertices to set is also very easy; assuming the blobs of paint flying around are spherical, it’s just a point-in-sphere test, which is about as easy as it gets (though on the other hand, you’d have to do this for thousands of vertices).
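The point-in-sphere painting pass might look like this sketch (plain Python; a real version would narrow the candidate vertices with some spatial partition first rather than scanning them all):

```python
def paint_vertices(vertices, colors, center, radius, team_channel):
    """Set the team's vertex-color channel for every vertex inside the
    paint blob's sphere. `vertices` is a list of (x, y, z) tuples and
    `colors` is a parallel list of mutable [r, g, b] values, with one
    channel reserved per team."""
    r2 = radius * radius
    for i, (x, y, z) in enumerate(vertices):
        dx, dy, dz = x - center[0], y - center[1], z - center[2]
        # Compare squared distances to avoid a sqrt per vertex.
        if dx * dx + dy * dy + dz * dz <= r2:
            colors[i][team_channel] = 1.0
```

Writing a soft falloff (e.g. 1 at the center down to 0 at the radius, taking the max with the existing value) instead of a hard 1.0 would give the threshold shader more to work with.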

Let’s do some quick math… say we have a vertex every 0.1 m; that’s 100 verts per square meter. The 10x10 plane in my image above would have 10,000 vertices. That 1x2x3 box would have (…doing math…) 19 m^2, or 1900 vertices. That’s a lot but it’s not completely outrageous. And maybe you could apply a LOD or two (a 0.2-m spacing would be 1/4 the verts, and a 0.4-m spacing would be only 1/16 as many).
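That back-of-the-envelope estimate is just surface area divided by vertex spacing squared:

```python
def vert_count(area_m2, spacing_m):
    """Approximate vertex count for a surface tessellated at a uniform
    vertex spacing: one vertex per spacing^2 of area. Halving the
    spacing quadruples the count, which is what makes LODs attractive."""
    return area_m2 / (spacing_m * spacing_m)
```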

This sounds worth a try!

OK, first quick test of that approach:

Wraps around edges and seams beautifully, and feels very fast. Rather blocky though, as you can see. This is of course with a simple threshold, with no noise, and no anti-aliasing where I paint the vertices.

Here’s a close-up in shaded wireframe mode, so you can see what’s going on…

Notice that the blockiness very much depends on whether the edge lines up neatly with the triangles in the mesh, or runs counter to it. In the floor strip above, it’s counter to the triangles, and so comes out considerably worse than a strip running the other way. This is an issue I ran into with zone painting in High Frontier as well. We were able to mostly hide it there, though I no longer remember exactly how… we may have added an extra vertex at the center of each square, so there was no strong preferred direction, at the cost of 25% more vertices and 100% more triangles.

But it might also be that adding some noise and a softer edge to the vertex colors will hide the edge well enough.

It is outrageous.
You’d definitely need an octree or something to narrow down the vertices.
And yes, they’re not that cheap.
I think that if you take a really good decal system and make it expand the nearest decal instead of just blindly applying yet another quad (and that’s the best case, when you only hit a flat plane!), you’ll get pretty decent results.

Limit how complex the “expanded decal” things can get at most.

Or if you go the mesh route, why not just add additional vertex attributes to all the models in the scene itself, instead of duplicating them?
I mean, what you’d do is essentially copy the vertex positions just so you have another vertex-color channel.
You could just reuse the existing vertex colors (if they exist, and if not, add them)… I don’t get why you’d have a full copy of everything, even if that copy is simplified.

The only downside would be that your whole level and all its props have to be drawn with your custom shader.
And the bad part about that is that you essentially have to make a copy of the standard shader and then add support for blending toward your goo/ink.

That will make the shader pretty expensive, but if it’s just on PC you’ll likely never notice any problems.

In any case, if you really try this, let us know how it goes!

I dunno though… 10,000 polygons isn’t unusual for a main character, and it’s not uncommon to have lots of “main characters” running around these days. Here 10k polys is an entire section of the level. It might be OK.

But look again at the Splatoon screenshot at the top of this thread. There is SO MUCH PAINT all over everything, with complex edges and detail, all wrapped neatly around even curving shapes, and honoring the underlying normal maps… I just can’t imagine doing all that with a decal system.

I’m not sure where you got the idea we were copying anything. There is no copying here… I just made a high-poly version of the models in my modeling app, and used that instead of the low-poly version. (But as @brownboot67 pointed out, we could still use the low-poly version for the collision mesh.)

Meh. Shaders aren’t that hard. And I assumed from the beginning that a custom shader would be needed. It doesn’t have to be as complex as the modern Standard Shader; it only needs to support the features I need to support.

Yeah, if I were going to actually do this, it would be for PC, since my main interest would be in playing it with a keyboard and mouse. I suck at aiming with a gamepad!

I don’t know if I will actually make anything of it, though… mostly I just want to figure out how!

Yep, what you’re doing is what I’m thinking. You can allow much wider interpolation space and let the shader do the interesting bits to define the shape. Like, a quarter of the loops on what you have would probably look just as good with some decent noise and thresholding.

Verts aren’t bad. Especially on PC/console. We’re doing close to a million verts a frame on medium power mobile devices in our new The Walking Dead project at 30 fps.

If you aren’t making a custom shader for each type of your assets, you’re doing it wrong. At the very least you should be packing your data more efficiently.
