How to "integrate" other objects

Hi there,

does anyone have an idea on how to write a shader which “integrates”/blends other objects with a given falloff?
In CryEngine it’s called “soft depth testing”. I think this video explains it better:

Is there a way to get this same effect with Amplify? I’ve already tried “Depth Fade”, but this only gives me transparency on the edges, not blending.

Thanks for any help!

You’d use alpha blending on your shader and modulate your transparency based on the current fragment’s distance from the depth in the depth buffer. (Though you might have to do that pass with ZWrite Off, and then a second pass that only outputs depth, to avoid blending overlapping parts of the same object.)
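In plain shader code (outside Amplify) the idea looks roughly like this. It’s only a minimal unlit sketch, `_FadeDistance` is a made-up property name, and it assumes the camera is rendering a depth texture:

```
Shader "Example/SoftDepthBlend"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _FadeDistance ("Fade Distance", Float) = 1.0
    }
    SubShader
    {
        Tags { "Queue"="Transparent" "RenderType"="Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha
        ZWrite Off // see the note above about a separate depth-only pass

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            UNITY_DECLARE_DEPTH_TEXTURE(_CameraDepthTexture);
            float _FadeDistance;

            struct v2f
            {
                float4 pos       : SV_POSITION;
                float2 uv        : TEXCOORD0;
                float4 screenPos : TEXCOORD1; // .z carries this fragment's eye depth
            };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.uv = v.texcoord.xy;
                o.screenPos = ComputeScreenPos(o.pos);
                COMPUTE_EYEDEPTH(o.screenPos.z);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // depth of whatever is already in the depth buffer behind this fragment
                float sceneZ = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos)));
                // this fragment's own eye depth
                float fragZ = i.screenPos.z;
                // 0 where the surface touches the background, 1 once it is _FadeDistance in front of it
                float fade = saturate((sceneZ - fragZ) / _FadeDistance);

                fixed4 col = tex2D(_MainTex, i.uv);
                col.a *= fade;
                return col;
            }
            ENDCG
        }
    }
}
```

Amplify’s Depth Fade node should give you essentially that `fade` value; the difference is just what you feed it into (opacity, a lerp between textures, etc.).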

Thanks @Invertex for your help! I’ll try your advice with Amplify. Do you have an example of how to do that in Unity? Or any advice on how to do a second pass with Unity + Amplify?

Again, thanks very much for any help! :slight_smile:

Do I need a GrabPass or CommandBuffer for this? I’ve tried to sample the depth from “_CameraDepthTexture”, but I can’t get it to blend correctly. I’ve got triplanar mapping to work correctly and the albedo blends fine, but how do I average the normals correctly? Any advice?
Thanks very much!

OK I think I’ve made some progress… I tried two different blending techniques:

  • Alpha blending (this sucks, because of shadows behind the surface!)
  • An extra pass which renders the terrain’s world-space normals into a texture, which I thought could then be used to blend the vertex normals together.

So my blending shader looks like this:

My world normals render texture looks like this:
3489744--277862--upload_2018-5-8_19-50-4.png

The terrain and the cliff WITHOUT vertex normal adjustment looks like this:

WITH my normal adjustment it looks like this:

Why won’t my vertex normals blend correctly?
Isn’t this mathematically possible?
Is my only option to use a SDF (Signed Distance Field) texture like in UE4?

I don’t use the Unity terrain but a simple mesh, so I don’t have a heightmap available.

Thanks!

If you want to blend with terrain, go get the MicroSplat module off the Asset Store.

@brownboot67 As I said, this isn’t a reasonable solution for me:

  • It needs additional baked “blend maps”, so it’s not realtime.
  • I don’t want to attach a “blend script” to each of my rocks and props.
  • AFAIK it only works with the Unity terrain engine. So I think they’re basically generating/reading out a heightmap of the terrain and feeding a blend shader with that information.

What I’m trying to achieve is a realtime solution like CryEngine does (as seen in the video above) or this one.
I don’t think that a statically generated heightmap is a requirement for this technique to work. Correct me if I’m wrong, but I believe it must also be possible with an additional render pass.

Are you sampling the depth normals texture and trying to use that? It’s a view-space normal. It looks like you’re just not doing the blend between the different spaces correctly. Are you doing this with a surface shader or a fragment shader?

I’m not using the _CameraDepthNormalsTexture (because I had no success with that) but generating my own world-space normals texture which I sample instead. Maybe that’s where my misconception is, but in that pass I transform the normals from object space to world space in the vertex shader, along with the vertices.
In the Amplify shader shown above I then simply blend both world-space normals together and transform the result back to object space as input for the surface shader.

So yes, the second pass is a surface shader. The first pass, for generating the world normals, is a fragment shader.
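For reference, the pass that writes my world-space normals boils down to something like this (a stripped-down sketch, with the normal packed into a 0–1 color):

```
Shader "Example/WorldNormalsOnly"
{
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f
            {
                float4 pos         : SV_POSITION;
                float3 worldNormal : TEXCOORD0;
            };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                // object space -> world space normal
                o.worldNormal = UnityObjectToWorldNormal(v.normal);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // pack the -1..1 world normal into a 0..1 color
                float3 n = normalize(i.worldNormal) * 0.5 + 0.5;
                return fixed4(n, 1);
            }
            ENDCG
        }
    }
}
```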

At the basic level, yes, you need a “height map” of some kind. That height map can either be a terrain’s height map, or a distance field, or even a depth buffer. The key is that it needs to be of just the “terrain” in question. I suspect Crytek is rendering out the terrain to its own depth buffer prior to rendering the rest of the scene and then blending against it. It’s a deferred renderer, so it’s just rendering the blended normal into the gbuffer. You should be able to do the same. I have no idea what exactly Amplify does when you modify the vertex normal, though I would expect it to royally screw with normal maps.

The way MicroSplat works, from a high level explanation, is it makes a copy of the terrain normals and height and stores it in static textures. Those are effectively views of the terrain from directly above it looking down. If you were to do the same, i.e. render your “terrain” from directly above rather than from the camera view, you could much more trivially blend your meshes into the terrain in the fragment shader by calculating the world height relative to that height map and then lerping the normal in the same direction. You’d need to use the world to tangent matrix to transform the “terrain” normal into the form needed by the normal node.
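In rough fragment code the blend might look something like this. It’s only a sketch: the texture and property names are placeholders for whatever you set up yourself when rendering the terrain from above, and it’s meant to be dropped into your fragment/surface function:

```
// Placeholder inputs: a top-down capture of the terrain's world height and
// world-space normal, plus the area that capture covers.
sampler2D _TerrainHeightTex;   // world height of the terrain, seen from above
sampler2D _TerrainNormalTex;   // world-space normal of the terrain, seen from above
float3 _TerrainCaptureCenter;  // world position the top-down camera looks at
float  _TerrainCaptureSize;    // world-space width/height covered by the capture
float  _BlendHeight;           // how far above the terrain the blend fades out

// Returns a tangent-space normal ready to feed into the normal input.
float3 BlendWithTerrain(float3 worldPos, float3 worldNormal, float3x3 worldToTangent)
{
    // map world XZ into the 0..1 UV space of the top-down capture
    float2 uv = (worldPos.xz - _TerrainCaptureCenter.xz) / _TerrainCaptureSize + 0.5;

    float  terrainHeight = tex2D(_TerrainHeightTex, uv).r;
    float3 terrainNormal = tex2D(_TerrainNormalTex, uv).rgb * 2.0 - 1.0;

    // 1 at (or below) the terrain surface, fading to 0 at _BlendHeight above it
    float blend = saturate(1.0 - (worldPos.y - terrainHeight) / _BlendHeight);

    // pull the mesh's world normal toward the terrain's world normal
    float3 blended = normalize(lerp(worldNormal, terrainNormal, blend));

    // world -> tangent space so it can be used like a normal map sample
    return mul(worldToTangent, blended);
}
```

You’d usually blend the albedo (and whatever else) with the same blend factor so the base of the mesh picks up the terrain’s color as well.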

Thanks for this useful information! :slight_smile:

So as far as I understand, you’re describing two different approaches:

  • Normal blending in screen space: first render everything else normally into the GBuffer, then render the terrain mesh into its own GBuffer and blend its normals based on the terrain depth texture (needs multiple render passes).
  • Normal blending based on static, pre-baked depth/height maps (possible in one pass).

Is my interpretation correct?

I think I prefer the “screen-space” approach with multiple passes, because this way I don’t need any pre-baking. How would I do this in Unity? Are CommandBuffers the way to go?

Yes and no.

I was more saying there are basically two approaches in terms of “orientation” of the terrain-only “gbuffers”: screen space and world space. The baked method is always a view that’s top down in world space, because that’s the orientation terrain always has and because it works regardless of the view, but there’s no reason it has to be pre-baked offline, or even be that specific orientation. Really, there’s no reason you couldn’t render the terrain once when the game starts up, or render small per blended-object (or object group) textures at runtime when needed.

The top-down approach has the benefit of working universally and only needing the additional buffer draw once.

The screen space method is conceptually a little simpler, but harder in practice due to how Unity draws things and the controls you have. If you want per-object blend settings, drawing the terrain last and doing the blend there makes that harder. It also means you’d blend with things that shouldn’t blend, like the player’s feet etc., unless you find a way to specially mark them. Also, the terrain shader is usually one of the most complex shaders in the game, so doing an additional texture lookup and blend there may not be plausible.

@bgolus Thank you for that awesome input!!!

Not only did you point out a simpler idea, you also showed me that the screen-space solution would be harder to control (what should be blended and what not).
I’m very grateful for your help and I will try the “top-down” approach by rendering the current terrain heights around the player into a texture and using that as input for the blending.
As far as I’ve done my homework, the way to do this is by using a second top-down orthographic camera and Unity’s CommandBuffers? Am I right, or should I use my own render textures attached to a camera? I think CommandBuffers will give me more control?
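What I have in mind for the height pass of that top-down camera is something like this (just a rough sketch; it assumes the terrain mesh gets rendered with it into a floating point render texture):

```
Shader "Example/TerrainHeightCapture"
{
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f
            {
                float4 pos    : SV_POSITION;
                float  height : TEXCOORD0;
            };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                // world-space height of this vertex
                o.height = mul(unity_ObjectToWorld, v.vertex).y;
                return o;
            }

            float4 frag (v2f i) : SV_Target
            {
                // write the interpolated world height into the red channel
                return float4(i.height, 0, 0, 1);
            }
            ENDCG
        }
    }
}
```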

It’s not perfect, but I’m very pleased with the results so far…

3502791--279302--terrain_blending_alpha.gif
