What is wrong with my tiling shader?

I’m building a prototype for a game concept that includes a large tiled map. (Yes, it’s a sidescroller; no, it’s not a platformer.) I’m aiming to have around 10,000 tiles on screen at one time.

I tried rendering the tiles overlapping the camera with DrawMesh/DrawTexture, but this resulted in a lot of draw calls (about 25 fps).

My current approach is a paging system that loads chunks of tiles overlapping the camera. Chunks are currently 32 tiles wide, and every tile is 16 pixels wide. Chunks are only updated when the map changes. This gives me over 1000 fps. I’m using a custom shader with a dependent texture lookup to render the right tiles at the right places: _MainTex is the tilesheet, and _TileData is a 32×32-pixel texture containing the indices of the tiles to render. I thought I was being smart, but apparently not.
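To make the paging idea concrete, here is a minimal sketch (plain Python, outside Unity; the function name and camera-rectangle parameters are mine, not from the post) of the math for deciding which chunks overlap the camera, assuming chunk coordinates are just tile coordinates divided by the chunk size:

```python
import math

# Hypothetical sketch of the paging math: which 32x32-tile chunks
# overlap the camera rectangle. Units are tiles; in the post, each
# tile is 16 pixels wide.
CHUNK_TILES = 32  # chunk width/height in tiles, as in the post

def visible_chunks(cam_left, cam_right, cam_bottom, cam_top):
    """Return (cx, cy) indices of every chunk overlapping the camera rect."""
    x0 = math.floor(cam_left / CHUNK_TILES)
    x1 = math.floor(cam_right / CHUNK_TILES)
    y0 = math.floor(cam_bottom / CHUNK_TILES)
    y1 = math.floor(cam_top / CHUNK_TILES)
    return [(cx, cy) for cy in range(y0, y1 + 1) for cx in range(x0, x1 + 1)]

# A camera spanning tiles x=10..70, y=5..40 touches chunks (0..2, 0..1):
print(visible_chunks(10, 70, 5, 40))
```

Only the chunks this returns need their _TileData textures resident, which is why updates are cheap when the map doesn’t change.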

There is a bug in my implementation that I find very hard to track down. The following map is filled with stone tiles, with a round gap of air tiles in the center. As seen in the image, tiles are somehow offset or otherwise misplaced; the black lines show where the tile edges should be. I can’t figure out where the problem lies, or whether the problem is more fundamental.

shader:

// This shader renders a region of tiles from a tilesheet
// The tile sheet texture is _MainTex.
// Tiles are numbered from left to right, top to bottom.
// Dependent texture lookup is used to determine which tile to render at which location
// The tile data texture is _TileData
// _TexTilesWide and _TexTilesHigh specify the number of tiles in the tile sheet
// _Tiles specifies the number of tile rows and columns in the region to render.
// The tile data texture should have a width and height of _Tiles pixels.
Shader "Tiles/Tilesheet TopBottom"
{
	Properties 
	{
		_MainTex ("Base (RGB)", 2D) = "white" {}
		_TileData ("Tile Data", 2D) = "black" {}
		_TileColor ("Tile Color", Color) = (1,1,1,1)
		_TexTilesWide ("# Tiles in texture row", Float) = 16
		_TexTilesHigh ("# Tiles in texture column", Float) = 16
		_Tiles ("# Tiles", Float) = 16
	}
	SubShader 
	{
		Tags { "RenderType"="Opaque" }
		LOD 200
		
		CGPROGRAM
		#pragma target 3.0
		#pragma surface surf Lambert

		sampler2D _MainTex;
		sampler2D _TileData;
		float4 _TileColor;
		float _TexTilesWide;
		float _TexTilesHigh;
		float _Tiles;
		
		// Take a small margin to prevent bleeding
		// (note: currently declared but never used in surf below)
		float2 margin = float2(0.0001,-0.0001);

		struct Input
		{
			float2 uv_MainTex;
			float2 uv_TileData;
			float2 tileUV;
		};

		void surf (Input IN, inout SurfaceOutput o) 
		{
			float2 diff = 1 / float2(_TexTilesWide, _TexTilesHigh);
			float2 correction = diff / 2;
		
			// Base uv
			float2 main_uv = frac(IN.uv_MainTex * _Tiles);
			float2 tile_uv = IN.uv_TileData;
		
			// Calculate tile uv
			float4 tiledata = tex2D (_TileData, tile_uv);
			float tiletype = floor(tiledata.a * 255);

			float u_offset = frac(tiletype/_TexTilesWide);
			float v_offset = 1 - floor(tiletype/_TexTilesWide)/_TexTilesHigh - 1/_TexTilesHigh;
			float2 uv_offset = float2(u_offset,v_offset);
			
			float2 combined_uv = uv_offset + main_uv * diff;
			
			half4 c = tex2D (_MainTex, combined_uv);
			
			o.Albedo = c.rgb * _TileColor;
			//o.Albedo = float3(tiletype,0,0);
			clip(c.a-0.5);
		}
		ENDCG
	} 
	FallBack "Diffuse"
}

[edit]

Some more info:

_Tiles is the number of tile rows and columns to be rendered on the quad by this shader.
_TexTilesWide and _TexTilesHigh are the number of tiles in the tilesheet texture.

float2 main_uv = frac(IN.uv_MainTex * _Tiles); divides the quad into a grid of tiles, each with a local uv in the range 0–1.

uv_offset then selects which tilesheet tile to display in that range. Tile numbers map to rows and columns like this (for a 16-tile-wide sheet):

pixel value:        row:        column:
0                   0           0
1                   0           1
15                  0           15
16                  1           0
17                  1           1

combined_uv then contains the final uv values, combining the two. The rest is just sampling the tilesheet texture.
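For clarity, here is the same index-to-UV arithmetic mirrored on the CPU (plain Python; the constants assume a 16×16 tilesheet, i.e. _TexTilesWide = _TexTilesHigh = 16, and the helper names are mine):

```python
import math

# CPU mirror of the shader's index-to-UV math for a 16x16 tilesheet.
TILES_WIDE = 16
TILES_HIGH = 16

def row_col(tiletype):
    """Row/column of a tile number, counted left-to-right, top-to-bottom."""
    return (tiletype // TILES_WIDE, tiletype % TILES_WIDE)

def uv_offset(tiletype):
    """Same arithmetic as the shader's u_offset / v_offset lines."""
    u = (tiletype / TILES_WIDE) % 1.0  # frac(tiletype/_TexTilesWide)
    v = 1 - math.floor(tiletype / TILES_WIDE) / TILES_HIGH - 1 / TILES_HIGH
    return (u, v)

for t in (0, 1, 15, 16, 17):
    print(t, row_col(t), uv_offset(t))
```

Tile 0 lands at v = 0.9375 (the top row of the sheet in bottom-up UV space), and tile 17 at (0.0625, 0.875), so this part of the shader matches the table above.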

[answer]

I think the problem is where you look up the tile number (dirt or water), and I think the solution is changing the tile-data texture’s filter mode to “Point”.

Imagine your 32×32 tile-number grid (the shader defaults _Tiles to 16, but say it’s really 32; I’m going to assume you’ve changed it to 32 in the material Inspector to match your chunk size). Say part of it, on a 0–255 scale, looks like:

0 0 1 0
0 1 1 0
0 0 0 2

The part where you look it up (I’m guessing uv_TileData corresponds to the world x/z position):

float2 tile_uv = IN.uv_TileData;
...
float4 tiledata = tex2D (_TileData, tile_uv);
float tiletype = floor(tiledata.a * 255);

Across the top row, the shader sees interpolated values rising from 0 up to 1 at the exact center of tile #3, then falling back down to 0. So your floor gives you solid 0’s (you’d expect a single 1 pixel at the exact center, but odds are the sample positions “jump” over it). The second row is similar: you get solid 1’s only from the center of tile #2 to the center of tile #3. floor turns the left side of tile #2 (values climbing from 0.5 up to 0.99) and the right side of tile #3 (dropping from 0.99 back to 0.5) into 0’s.
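You can sketch that effect in 1D (plain Python; the assumption here is that bilinear filtering reduces to linear interpolation along a single row, with texel centers at (i + 0.5)/N as on the GPU):

```python
import math

def sample_linear(row, u):
    """Linearly filtered sample of a row of tile IDs at texture coord u."""
    x = u * len(row) - 0.5       # texel centers sit at (i + 0.5) / N
    i = math.floor(x)
    f = x - i
    a = row[max(i, 0)]
    b = row[min(i + 1, len(row) - 1)]
    return a * (1 - f) + b * f

row = [0, 0, 1, 0]  # top row of the example grid
for u in (0.375, 0.5, 0.625, 0.75):
    raw = sample_linear(row, u)
    print(u, raw, math.floor(raw))
```

Only the sample exactly at tile #3’s center (u = 0.625) returns 1; every other sample interpolates below 1 and floors to 0, which is the disappearing-tile effect described above.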

Does your top-left area have four “Dirts” going across, which are trimmed into two full tiles and two halves? As a test, try giving your lookup texture a single row/column of 1’s; that should produce nothing. A 2×2 group of 1’s should make a single 1 tile, and a double row of 1’s should make a single row of 1’s.

You could also try floor(tiledata.a*255 + 0.5); to round to the nearest integer.
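A quick sanity check of that fix (plain Python, values on the same read-back scale as tiledata.a * 255):

```python
import math

# floor(x) vs floor(x + 0.5) on interpolated values near a texel whose
# true tile ID is 1:
samples = [0.49, 0.5, 0.75, 0.99, 1.0]
print([math.floor(s) for s in samples])        # plain floor loses the 1's
print([math.floor(s + 0.5) for s in samples])  # round-to-nearest keeps them
```

Note this only widens the band where the right ID is recovered; it doesn’t help with the last-row problem.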

The last row has a different problem. Interpolated values between tile #3 and tile #4 go from 0 through 1 (at exactly the border) up to 2, so you’ll get slightly smaller regions of terrain 0, 1 and 2 instead of just 0 and 2. The shader can’t tell, and doesn’t care, that none of the actual pixels is a 1.

Setting the texture’s filter mode to “Point” turns off interpolation completely: it just gives you the value of the nearest pixel. In other words, it forces the shader to treat the texture as a 2D array and pick one actual value out of it.
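In the same 1D sketch (plain Python; nearest-texel lookup standing in for Point filtering), no phantom IDs can appear, because every sample returns an actual texel value:

```python
def sample_point(row, u):
    """Nearest-texel sample of a row of tile IDs at texture coord u in [0,1)."""
    i = min(int(u * len(row)), len(row) - 1)
    return row[i]

row = [0, 0, 0, 2]  # bottom row of the example grid
values = {sample_point(row, u / 100) for u in range(100)}
print(values)  # only IDs that actually exist in the row, never a 1
```

With Point filtering the floor/round question also becomes moot, since the sampled value is already an exact tile ID.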