Splash Cars, how did they do that?

I came across this fantastic game a couple of weeks ago, and since then I’ve been trying to figure out how they’re doing this paint effect in real time. It would be great if somebody could break it down for fellow developers like myself.

Here are a few approaches I have in mind, but none of them seems ideal here.

1: The levels are made of modular meshes, and for every mesh there is a second mesh underneath it. Every mesh in the upper layer has a grayscale material applied to it, while the meshes in the lower layer have colored materials. With OnCollisionEnter, they dissolve the upper mesh, revealing the lower colored geometry and creating the illusion of a paint effect.

2: Using some sort of particle system, as shown in the video below? Could that be it? I don’t know.

Please let me know if I’m right or guide me if I’m not.

It’s pretty easy with a shader.

Probably everything has two textures, one in grayscale, another in color. A third input texture (think of it as an overlay) tells the shader whether to use the grayscale or color texture at each location. For example, the input texture may start as pure white (indicating grayscale) and as the car moves the program draws in black areas (indicating color) on the input texture.
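As a concrete sketch, here’s that selection logic simulated on the CPU in plain Python (the texture names and the tiny 2x2 “textures” are made up purely for illustration; in the real shader this would be a single per-fragment lerp):

```python
# Minimal CPU-side simulation of the mask idea: each pixel picks the
# grayscale texel or the color texel depending on the mask texel at the
# same position. All names and values here are invented for illustration.

def shade(gray, color, mask):
    """Return the output image: mask 1.0 -> grayscale, mask 0.0 -> color."""
    return [
        [g if m >= 0.5 else c for g, c, m in zip(grow, crow, mrow)]
        for grow, crow, mrow in zip(gray, color, mask)
    ]

gray  = [[0.5, 0.5], [0.5, 0.5]]   # desaturated texture
color = [[0.9, 0.1], [0.2, 0.8]]   # colored texture
mask  = [[1.0, 1.0], [0.0, 1.0]]   # white = grayscale, black = "painted"

out = shade(gray, color, mask)
print(out)   # [[0.5, 0.5], [0.2, 0.5]]
```

The bottom-left pixel is the only one whose mask was drawn black, so it’s the only one that shows the colored texture.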

There are multiple ways this could be done, and it would be easy to add other effects like cross-fade, although there are probably mobile performance considerations. I don’t do mobile, wouldn’t know about that in any detail.

I’d say you could replicate it with just a bunch of variously shaped (random rotation/scale) decals that expand out a bit as the car moves… It’s not how I think they’re doing it (it clearly looks like some texture or shader implementation, not overlaid ‘decals’ added and removed based on the vehicle), but I think you could replicate the effect pretty quickly with the decal approach.

I’m pretty sure decals couldn’t do that.

As for “quickly”… the guts of the shader approach I described could be done in a single line of HLSL (though any fellow developers would probably appreciate some comments). Off the top of my head it would look something like this:

return lerp(tex2D(_TxGrayscale, i.uv), tex2D(_TxColor, i.uv), abs(tex2D(_TxMask, i.uv).r == 0.0));

Of course, “decals” are just textures with transparency that are thrown on-screen as the last step in whatever else a shader did, so even if you could figure out a way to make decals work, you’re still talking about shaders. And since the Standard Shader’s Detail Albedo is such a pain in the ass (judging by all the questions, confusion, and unhappiness over the loss of the old dedicated decal shader), if you did come up with a decal technique, you’d still probably end up writing or buying a shader.

That’s what I was thinking: the same concept as a splat map (maybe they’re even dynamically updating a splat map for the ground, or updating an alpha channel), and the buildings and such could be done with a custom shader. It might not even require two textures; it might just use an alpha mask of some kind on the colored texture, with some input parameters that cause it to color the model.

If you watch the way the buildings colorize, I feel pretty sure it’s the exact same technique. It looks like the mask starts out grayscale and the “colorize” color is an expanding circle drawn on that mask texture, then the UV maps just do their thing on the geometry.

I thought about the mask applying grayscale conversion to a colored texture, but the grayscale version is different. For example, in the YouTube thumbnail, the dirt path in color has speckles, but the grayscale version is a solid color.

Forgot to reply to this part. True, the alpha channel could be used as the mask. Then it would be just two textures. Two lines of code, but now one fewer texture lookup…

float4 colorized = tex2D(_TxColor, i.uv);
return lerp(tex2D(_TxGrayscale, i.uv), colorized, abs(colorized.a == 0));

However, frequently writing any texture to the GPU is asking for trouble. It may perform better to use a separate third mask texture and set that one to the smaller Single Channel texture type so you’re writing a lot less data with each update.
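For rough numbers (assuming 4 bytes per pixel for a full-color RGBA32 texture and 1 byte per pixel for a single-channel R8 mask; the texture size and the per-frame dirty region are illustrative, not from the game):

```python
# Back-of-the-envelope upload cost for updating the mask each frame.
# Assumes 4 bytes/pixel for RGBA32 and 1 byte/pixel for R8; the sizes
# below are made-up illustrative numbers, not Splash Cars' actual ones.

def upload_bytes(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel

full_rgba = upload_bytes(1024, 1024, 4)   # whole RGBA mask: 4 MiB
full_r8   = upload_bytes(1024, 1024, 1)   # whole R8 mask:   1 MiB
dirty_r8  = upload_bytes(64, 64, 1)       # 64x64 dirty rect: 4 KiB

print(full_rgba, full_r8, dirty_r8)
```

Updating only a small single-channel region behind the car is roughly three orders of magnitude less data than re-uploading a full-color mask every frame.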

Edit: As written, there is a bug: colorized is selected when alpha is zero, which is transparent… In this case it would make more sense to put the alpha channel on the grayscale texture; that way alpha 0 selects the colorized version, alpha 1 selects the grayscale version, and as a bonus you don’t need an extra line of code to artificially override the alpha for the return value. Alternatively, if you use the colorized alpha, reverse the logic. Hyper-over-optimization FTW!

I really appreciate all the replies, you guys are amazing!

I’m not a shader guy, but I think this is what I’m looking for. I do have a shader I ended up creating after a lot of hair-pulling in Shader Forge; with that shader, I can lerp between two textures by sliding the alpha value of a (third) mask texture. But drawing in black areas on the fly, within a specific area and without affecting the whole texture, is what I think is the tricky part here.

Attached is my node setup in SF and the shader file, just in case some Shader Forge guy could point me to the right node setup, or some of you could modify the code for me.

2992124--222824--node setup.jpg

2992124–222823–dualmapsSF.shader (23.3 KB)

Well, I use EasyDecal (honestly another type of gamedev feature Unity should have more advanced built-in support for), so that kind of handles the extra shaders for the more advanced decal implementations, where I think you could get away with a decal approach of some sort. In any case, I think draw calls adding up as the level goes on is probably the bigger problem with that method. I don’t think that’s what they’re doing, but if you have a decal asset, it would make prototyping that approach easier; it’s not exactly hard, and it needs less knowledge of shaders and texture-updating code, which is a technique far fewer people are familiar with. Even Unity’s tutorials don’t get to that sort of advanced material.

Yeah, when I read your post I realized I didn’t remember seeing any decal shaders (I started with Unity 5), so I did a quick search to see if I’d overlooked one, and encountered all the misery associated with the Detail Albedo thing. Kind of surprising. PBR is great, but not everything in the world needs that kind of complexity.

That would be done by the game at runtime.

It looks like they’re doing a simple trail effect. For example, maybe every few frames it adds the player’s xy coords and a timestamp to a list. The timestamps in the list are used to draw expanding “blots” on that mask texture, centered on the xy coords, during the next several frames, and after some duration that xy/timestamp entry is dropped from the list. I guess in that sense it’s sort of like a particle effect, but the particle API would be major overkill for that type of thing. (Their actual process looks a little more complicated: the shape is more like a boat wake, but I think the general principle applies.)
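A tiny Python sketch of that bookkeeping (this is a guess at the technique, not their actual code; the lifetime and growth-rate constants are invented):

```python
# Trail bookkeeping sketch: record (x, y, t) entries as the car moves,
# grow each blot's radius with age, and drop entries once they expire.
# BLOT_LIFETIME and GROWTH_RATE are assumed, illustrative constants.

BLOT_LIFETIME = 1.0    # seconds an entry stays in the list
GROWTH_RATE   = 40.0   # radius units gained per second

def update_trail(trail, car_xy, now):
    """Append the car position and prune expired entries."""
    trail.append((car_xy[0], car_xy[1], now))
    return [(x, y, t) for (x, y, t) in trail if now - t < BLOT_LIFETIME]

def blot_radius(entry_time, now):
    """Blots expand as they age, capped at the lifetime cutoff."""
    return GROWTH_RATE * min(now - entry_time, BLOT_LIFETIME)

trail = []
trail = update_trail(trail, (0.0, 0.0), 0.0)
trail = update_trail(trail, (1.0, 0.0), 0.5)
trail = update_trail(trail, (2.0, 0.0), 1.0)   # first entry (t=0.0) expires
print(len(trail), blot_radius(0.5, 1.0))       # 2 entries, radius 20.0
```

Each frame, the game would iterate the surviving entries and stamp a blot of the corresponding radius into the mask texture before uploading the changed region.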

Edit: Just realized you added the .shader file to your post – I don’t have ShaderForge. That blend amount is just a float so it applies to the whole texture. I don’t know if ShaderForge can express this, but you’d need to sample the blend texture by UV, then apply that value as the lerp step.

To expand on this a bit, I’m guessing by looking at the Shader Forge screen shot, but I’d get rid of all that blend stuff in the middle and wire one of the Mask channels directly into the third lerp parameter. (In my first one-liner shader earlier, I used the red channel.) Then the game code (on the C# side) could update the Mask texture at runtime as described above.

It certainly looks to me like you’re just driving a Photoshop brush around a layer mask. Probably not all that hard once you know the basic technique, although I personally have no clue how one might implement such a thing.

Seems easy enough: use a large render texture which is drawn onto the ground plane, and then simply render a simple particle trail to it over the course of some frames as the car moves around. Considering you could likely update most of, or even an entire, 4096x4096 render texture PER FRAME with graphical changes, drawing the small area of influence behind the car is no big deal.
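To illustrate why that’s cheap, here’s a small Python sketch of stamping a circular brush into a large mask array; only the pixels inside the brush footprint are touched each frame (the array size and brush radius are made-up illustrative numbers):

```python
# Stamping a small circular "paint" brush into a large mask: the per-frame
# cost is proportional to the brush footprint, not the whole texture.
# SIZE and the radius are invented numbers for the sake of the example.

SIZE = 256  # stand-in for a big render texture

def stamp(mask, cx, cy, radius):
    """Set mask pixels within `radius` of (cx, cy) to 0.0 ("painted")."""
    for y in range(max(0, cy - radius), min(SIZE, cy + radius + 1)):
        for x in range(max(0, cx - radius), min(SIZE, cx + radius + 1)):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                mask[y][x] = 0.0

mask = [[1.0] * SIZE for _ in range(SIZE)]  # starts all-grayscale
stamp(mask, 128, 128, 4)

painted = sum(row.count(0.0) for row in mask)
print(painted)  # 49 painted pixels out of 65,536
```

In Unity terms you’d do the equivalent with a render texture or Texture2D, writing only the dirty rectangle around the car each frame.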

Seems similar to something I was trying to figure out, but never got an answer to.

Through the miracle of “reading the rest of the thread,” you, too, can learn Terrible Shader Secrets.

Seriously, apart from all the declaration junk needed to expose the three textures and wire up the frag function, the code I posted is literally all that is needed on the shader side. Then as IH states, you’re just drawing white and black on a render texture and loading it back into the mask texture.

That’s it. Literally.

Yup.

Shaders scare me for some reason. I feel like I should be able to learn how they work, but they always seem like black magic so I end up making the sign against the evil eye and running away.

They’re some of the most programming fun I’ve had in recent memory. I bit the bullet and learned the basics about a year ago. Anybody reasonably handy with C or C# (or probably the J-word) will get their bearings quickly. Unity’s basic tutorials are OK (not great, they really only scratch the surface); after those, I found it was enough to start practicing GLSL on ShaderToy. It’s pretty trivial to translate GLSL into HLSL, and the next thing you know, you’re finding real solutions.

There’s a LOT of weirdness associated with them, it’s one of those situations where the language only approximates what is really happening, documentation is universally shitty, and there is too much Ultra Secret Knowledge that will trip you up, but at the end of the day, where else can you play with parallel processing on thousands of cores – and make pretty pictures, to boot?

Go for it.

That’s the flip side of it. I remember when I first started writing Photoshop filters, it was just mind-blowing how I could write a half-dozen lines of code that made massive changes happen in the image. I feel like if I wrapped my head around shaders, it would be the same kind of thing.

But ooh, scary. :stuck_out_tongue:

I’m definitely still learning, especially when it comes to vert shaders and lighting, but yes, it’s something along those lines. The lack of a good way to debug anything can be maddening. There are some interesting new tools that have cropped up lately but it’s all still pretty primitive by modern standards.

Before I end my hijack of this poor dude’s thread, beyond the Unity tutorials I have two good links I recommend for anybody wanting to get their feet wet.

This guy cranks out a ton of really simple, highly focused, single-effect shaders that are great from a “learn by reading the code” standpoint.

http://www.shaderslab.com/index.php?

And Jasper’s series are sort of famous around here – good shader stuff in his Rendering section:

http://catlikecoding.com/unity/tutorials/

A handful of ways with shaders: simple masking for the 3D objects. The ground could be done a couple of ways: the same masking, or even just simple geometry sorting and stacked meshes (Geometry+1, etc.).