Heya,
I’m currently developing a big dungeon/maze built from randomly positioned little tiles which I have already designed. These tiles are little terrains which I have stored as prefabs in my asset folder. I’m trying to assemble these terrains at run time. Right now all I am doing is spawning instances of these terrains and then positioning them.
But now I want their edges to blend in smoothly, like when you manually add neighbouring terrains in the Unity editor. So I tried using terrain.SetNeighbors(left, top, right, bottom), but when the game loads the terrains still have wide gaps between them. Does anyone have any idea how I can fix this?
cheers, Alex
Using the API to set the neighbors won’t handle the blending for you. That just allows you to register neighbors for painting operations.
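For reference, here is a hedged sketch of what that registration looks like at runtime ( the grid layout and names are my own assumptions, not from the original post ). Note that SetNeighbors only records adjacency so painting tools and LOD treat the tiles as connected; it does not move any heights, which is why the gaps remain:

```csharp
using UnityEngine;

public class TileConnector : MonoBehaviour
{
    // Assumed 2D grid of spawned Terrain instances (illustrative).
    public Terrain[,] tiles;

    // Registers each tile's neighbors: SetNeighbors(left, top, right, bottom).
    public void ConnectNeighbors(int width, int height)
    {
        for (int x = 0; x < width; x++)
        {
            for (int z = 0; z < height; z++)
            {
                Terrain left   = x > 0          ? tiles[x - 1, z] : null;
                Terrain right  = x < width - 1  ? tiles[x + 1, z] : null;
                Terrain bottom = z > 0          ? tiles[x, z - 1] : null;
                Terrain top    = z < height - 1 ? tiles[x, z + 1] : null;
                tiles[x, z].SetNeighbors(left, top, right, bottom);
            }
        }
    }
}
```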
Something you could do is define a blending distance for each tile, bind the neighboring heightmaps and the target heightmap in a shader, and blend between the height values there, then sync the texture data back to the CPU and update the physics. Note that this modifies your tiles directly, so it would affect gameplay if you have gameplay elements on top of your tiles that depend on height information.
Another option would be to create terrains between your instantiated terrain prefabs specifically for blending, so you don’t have to sacrifice any gameplay stuff you’ve set up in your terrain prefabs. For this you could probably just copy the shader code for the Create Neighbors tool.
Hey wyatttt,
Thanks for the advice and for explaining the problem. However, I am not sure where to start for these two solutions… Do you have any links or code you could share?
Thanks a ton, Alex
Would you mind explaining what you mean so I can better visualize this?
Am I gathering correctly that you’re suggesting a third mesh between two unique meshes, one that uses a particular terrain shader to “blend” the geometry between them? If so, I never considered it possible to use shaders to transform the physics of meshes directly. How might this be done, exactly?
Any examples (and shader locations or names) would be useful.
Also, just FYI – @wyattt (and to some other unity devs):
Please consider that many of your replies (like the ones above) are waaay above what most users (even I) realize is easily possible with the Unity Engine. This is mostly because there is a LOT of backend design that most users have NO idea is there (or what its purpose is), so regular users tend to be YEARS behind you guys in understanding what is possible with Unity, since we tend to learn most things from trial-and-error (and tiny questions like these).
So for you guys to be like “Yeah, just do this and then put together a little system for that, then copy/paste some unspecified this from an unspecified that, then BOOM, your problem is fixed!” – This tends to just leave most users’ heads spinning, rather than helping them. People just cannot grab onto that kind of thing without the proper context.
Unity is like a puzzle – Unlike most users, you know its pieces well, and what those pieces can be used for.
I apologize if I sound like I’m beating up on you – I’m not. – You aren’t the only dev I’ve seen this from! – Plus, I know you keep this in mind to some extent, but I also know you’re busy. Still, these kinds of replies to users seem all too common. Rather than “answering” our questions quickly, with blasts of vague information users must (but probably can’t) entirely decipher, why not take just a moment more and offer users more interesting pieces of that puzzle to look at, pointing them at how (exactly) those pieces might be used? More visual examples / code snippets / screenshots / links to similar ideas can never hurt.
In fact – you’ll get a much higher quality community after a while who can help each other better, meaning less time answering the same questions (or slight variations of them) over and over again in the future. There is very little practical documentation in Unity in areas like WorldBuilding, so it’s a win-win when users can point people back to your posts!
Just my two-cents. Hopefully it helps.
I believe you are absolutely correct and thank you for pointing this out. I will provide a more detailed description of what I meant and make another post.
I was suggesting this, but not the part about changing physics in the shader. The third mesh ( or terrain ) would just be there to blend between the adjacent Terrain Prefabs. Assuming you do the blending with a shader that modifies the heightmap of that third Terrain, the way you would get physics to work properly is you’d have to sync the heightmap data back to the CPU and rebuild the physics collider there.
P.S. I feel like I am years behind a lot of our users!
Ok, so:
The Theory
You have 2 tiles that you want to blend
To do this, you could blend the two tiles themselves which might be a little more complicated and would modify your Terrain instances
OR
you could create a third terrain tile, in between the two Terrain Prefabs, that is only there to hide the seam. This is the tile you would perform the blending on so that your Terrain Prefabs remain untouched ( thus not affecting any gameplay-related items in the Terrain Prefabs that depend on the heightmap, like placements of buildings etc.; otherwise, they might end up floating or under the terrain ). And these Terrain Tiles can be any width of your choosing ( width = the distance over which you allow the blending between any two Terrains to occur )
So you create a new Tile in between the Terrain Prefabs
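A minimal sketch of spawning such a blend tile at runtime might look like this ( the sizes, resolution, and names here are illustrative assumptions, not a definitive implementation ):

```csharp
using UnityEngine;

public static class BlendTileFactory
{
    // Creates a narrow terrain strip that occupies the gap between two tiles.
    // blendWidth = the blending distance you chose; tileLength/tileHeight
    // should match your Terrain Prefabs so the edges line up.
    public static Terrain CreateBlendTile(Vector3 position, float blendWidth,
                                          float tileLength, float tileHeight,
                                          int heightmapRes)
    {
        var data = new TerrainData();
        data.heightmapResolution = heightmapRes;  // e.g. 65 for a thin strip
        data.size = new Vector3(blendWidth, tileHeight, tileLength);

        GameObject go = Terrain.CreateTerrainGameObject(data);
        go.transform.position = position;         // place at the gap's corner
        return go.GetComponent<Terrain>();
    }
}
```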
Then you have to perform blending. In this scenario you do this via shader ( the alternative would be on the CPU ). I provide some thoughts on how to do this in a bit, but this is one result of performing the blending:
There’s another case that needs to be handled however and that is this case:
The issue here is that you have generated Terrain Tiles to blend all the adjacent Terrains but now you need to generate a Terrain Tile to blend between the 4 generated Terrain Tiles.
Once you have your Terrain Tiles generated, if you need physics to work for them, you will have to sync the heightmap from GPU to CPU and rebuild the Physics Collider. The way you do this depends on which Unity version you are on. In newer versions of Unity, iirc you should be able to use TerrainData.SyncHeightmap ( i.e. terrain.terrainData.SyncHeightmap() ).
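For newer versions ( roughly 2019.3+ ), a hedged sketch of that sync step could look like the following, assuming blendMat is a material running a heightmap blend shader like the one at the end of this post:

```csharp
using UnityEngine;

public static class HeightmapSync
{
    // Runs the blend shader into the terrain's heightmap RenderTexture,
    // then copies the result back to the CPU so the TerrainCollider
    // matches what is rendered.
    public static void BlendAndSync(Terrain terrain, Material blendMat)
    {
        TerrainData data = terrain.terrainData;
        RenderTexture heightRT = data.heightmapTexture;

        RenderTexture prev = RenderTexture.active;
        Graphics.Blit(null, heightRT, blendMat);  // GPU blend into the heightmap
        RenderTexture.active = heightRT;          // source for the copy below

        int res = data.heightmapResolution;
        data.CopyActiveRenderTextureToHeightmap(
            new RectInt(0, 0, res, res), Vector2Int.zero,
            TerrainHeightmapSyncControl.HeightAndLod); // queue CPU readback
        data.SyncHeightmap();                          // flush + rebuild collider

        RenderTexture.active = prev;
    }
}
```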
Blending
You will need your shader to reference at most 4 neighboring Terrain Heightmap RenderTextures. You can use the UV of the Terrain heightmap you are using to blend in order to determine how much weight the neighboring Terrains have on the height value at that particular UV.
The idea is that as U or V approaches either 0 or 1, you want the heights of your blend tile to match the heights of the neighboring Terrain along the seam.
You could do something similar to the built-in Create Neighbors Tool. That’s at least a good starting point. The shader code is at the end of the post.
Using that shader might not give you the intended results, however, since it is basically either:
- mirroring the heights of the neighboring Terrains
- Linearly interpolating the heights of the neighboring Terrains
so other options would be to also apply some noise along with the blend or, if you want to get really complicated, you might also be able to do a fast Fourier transform ( FFT ) analysis on the Terrain and extrapolate the “next” height values for each neighboring Terrain. You would then blend between those values for the actual heights of the blended Terrain Tile.
NOTE: You might also want to blend the Terrain textures for the tile
Here is the C# reference for the Create Neighbor Terrain Tool
Shader code for Create Neighbors Tool:
Shader "Hidden/TerrainEngine/CrossBlendNeighbors"
{
    Properties
    {
        _TopTex ("Top Texture", any) = "black" {}
        _BottomTex ("Bottom Texture", any) = "black" {}
        _LeftTex ("Left Texture", any) = "black" {}
        _RightTex ("Right Texture", any) = "black" {}
    }

    SubShader
    {
        Pass
        {
            ZTest Always Cull Off ZWrite Off

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment CrossBlendNeighbors
            #pragma target 3.0
            #include "UnityCG.cginc"

            uniform float4 _TexCoordOffsetScale;
            uniform float4 _Offsets;          // bottom, top, left, right
            uniform float4 _SlopeEnableFlags; // bottom, top, left, right; 0.0f - neighbor exists, 1.0f - no neighbor
            uniform float _AddressMode;       // 0.0f - clamp, 1.0f - mirror

            sampler2D _TopTex;
            sampler2D _BottomTex;
            sampler2D _LeftTex;
            sampler2D _RightTex;

            struct appdata_t
            {
                float4 vertex : POSITION;
                float2 texcoord : TEXCOORD0;
            };

            struct v2f
            {
                float4 vertex : SV_POSITION;
                float4 texcoord : TEXCOORD0;
            };

            v2f vert (appdata_t v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.texcoord.xy = v.texcoord;
                o.texcoord.zw = (v.texcoord + _TexCoordOffsetScale.xy) * _TexCoordOffsetScale.zw;
                return o;
            }

            float4 CrossBlendNeighbors(v2f i) : SV_Target
            {
                // All slope offset data is static, but we calculate it on the GPU because we don't want to access height data on the CPU
                float2 topSlope = float2(UnpackHeightmap(tex2Dlod(_LeftTex, float4(1.0f, 1.0f, 0.0f, 0.0f))), UnpackHeightmap(tex2Dlod(_RightTex, float4(0.0f, 1.0f, 0.0f, 0.0f)))) + _Offsets.zw;
                float2 bottomSlope = float2(UnpackHeightmap(tex2Dlod(_LeftTex, float4(1.0f, 0.0f, 0.0f, 0.0f))), UnpackHeightmap(tex2Dlod(_RightTex, float4(0.0f, 0.0f, 0.0f, 0.0f)))) + _Offsets.zw;
                float2 leftSlope = float2(UnpackHeightmap(tex2Dlod(_BottomTex, float4(0.0f, 1.0f, 0.0f, 0.0f))), UnpackHeightmap(tex2Dlod(_TopTex, float4(0.0f, 0.0f, 0.0f, 0.0f)))) + _Offsets.xy;
                float2 rightSlope = float2(UnpackHeightmap(tex2Dlod(_BottomTex, float4(1.0f, 1.0f, 0.0f, 0.0f))), UnpackHeightmap(tex2Dlod(_TopTex, float4(1.0f, 0.0f, 0.0f, 0.0f)))) + _Offsets.xy;

                float2 topSlopeOffset = _Offsets.y + _SlopeEnableFlags.y * topSlope;
                float2 bottomSlopeOffset = _Offsets.x + _SlopeEnableFlags.x * bottomSlope;
                float2 leftSlopeOffset = _Offsets.z + _SlopeEnableFlags.z * leftSlope;
                float2 rightSlopeOffset = _Offsets.w + _SlopeEnableFlags.w * rightSlope;

                float2 blendPos = saturate(i.texcoord.zw);
                float4 weights = 1.0f / max(float4(1.0f - blendPos.y, blendPos.y, blendPos.x, 1.0f - blendPos.x), 0.0000001f);
                weights /= dot(weights, 1.0f);

                float4 heights = float4(
                    UnpackHeightmap(tex2D(_TopTex, float2(i.texcoord.x, (1.0f - i.texcoord.y) * _AddressMode.x))),
                    UnpackHeightmap(tex2D(_BottomTex, float2(i.texcoord.x, 1.0f - i.texcoord.y * _AddressMode.x))),
                    UnpackHeightmap(tex2D(_LeftTex, float2( 1.0f - i.texcoord.x * _AddressMode.x, i.texcoord.y))),
                    UnpackHeightmap(tex2D(_RightTex, float2((1.0f - i.texcoord.x) * _AddressMode.x, i.texcoord.y)))
                );

                heights += float4(
                    lerp(topSlopeOffset.x, topSlopeOffset.y, blendPos.x),
                    lerp(bottomSlopeOffset.x, bottomSlopeOffset.y, blendPos.x),
                    lerp(leftSlopeOffset.x, leftSlopeOffset.y, blendPos.y),
                    lerp(rightSlopeOffset.x, rightSlopeOffset.y, blendPos.y)
                );

                return PackHeightmap(dot(heights, weights));
            }
            ENDCG
        }
    }
    Fallback Off
}
Let me know if I need to explain any bit in more detail and thank you again @awesomedata
@AlexandreBourgoin sorry for providing a really vague and unclear reply
Dear @wyatttt ,
thank you so, so much for this extremely clear and developed answer. I am so grateful to be part of this huge community of Unity developers who will never stop amazing me. I agree with @awesomedata that although we have many experienced programmers out there like you, not many seem to be able to provide us with comprehensible and simple answers. Nevertheless, you have proven to us that although you probably have to deal with many posts a day, you’re all nice guys who will always offer a helping hand.
Cheers again for all your help @wyatttt and @awesomedata
Alex
YOU are awesome! – Thanks so much for the clear explanation (and pictures!) – Although I didn’t intend to imply you should go that far, it definitely made a BIG difference! – A picture really is worth a thousand words.
I totally agree! – We love you for that, @wyatttt !!
And honestly, I think there are so many unseen (positive) effects that come from taking the time to offer substance like this, that people tend to underestimate the raw power of the goodwill that comes from just saying something as fully as you really should in the first place.
Even though it doesn’t always seem that way, it always pays to ensure there is substance – a genuine and real (practical) weight – to what you say or do in this world for others. That substance always connects with others, and is what makes it worthwhile to be on either side of a given human interaction – or communication, in this case.
You have gone above and beyond here @wyatttt . Fantastic.
There is a LOT to learn and understand to implement things like this. Above my pay grade, for sure! But it’s all a matter of giving it a go, and learning as you go.
For what it’s worth, it all depends on the needs of your game, but it could be easier ( fewer steps involved ) to join all your tiles up side by side, and then blend the terrain edges towards one another by iterating over the heightmaps of each terrain with some sort of “averaging out” between neighbours.
That way you don’t have to create extra terrains between your terrains.
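A rough sketch of that CPU averaging for one shared edge might look like this ( it assumes both tiles share a heightmap resolution and are positioned flush; the class and method names are illustrative ):

```csharp
using UnityEngine;

public static class SeamBlender
{
    // Averages the rightmost column of 'left' with the leftmost column of
    // 'right' so the two edges meet at the same heights.
    public static void BlendVerticalSeam(Terrain left, Terrain right)
    {
        TerrainData a = left.terrainData;
        TerrainData b = right.terrainData;
        int res = a.heightmapResolution;

        // GetHeights(xBase, yBase, width, height) returns float[height, width]
        float[,] edgeA = a.GetHeights(res - 1, 0, 1, res); // rightmost column
        float[,] edgeB = b.GetHeights(0, 0, 1, res);       // leftmost column

        for (int y = 0; y < res; y++)
        {
            float avg = 0.5f * (edgeA[y, 0] + edgeB[y, 0]);
            edgeA[y, 0] = avg;
            edgeB[y, 0] = avg;
        }

        a.SetHeights(res - 1, 0, edgeA); // SetHeights also updates the collider
        b.SetHeights(0, 0, edgeB);
    }
}
```

You could widen the blend by averaging several columns with a falloff instead of just the single shared edge.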
But who knows, maybe adding the extra terrains in between is actually easier.
A good educational post, whatever the case!
Agreed. It depends on your use case and the type of content you are trying to create.
To clarify, I suggested adding new tiles in case there were gameplay elements placed on the Terrain tile prefabs that would break if the Terrain Tiles themselves were modified. If that’s not the case, you could definitely simplify it with something like what @muzboz detailed.