How to properly sample (for 3D persp) a tiled sub-tex / sprite in a texture atlas?

I’m not talking about mip-bleeding and I don’t want to pad my sub-textures as I think it should be possible to fix filtering in my fragment shader… just wondering how :roll_eyes:

I’ve heard that ddx/ddy (aka dFdx/dFdy in GLSL) can be passed to tex2D(tex, uv, ddx(foo), ddy(foo)), but I have no idea what to use for ‘foo’ :smile:

First off, my setup. Here’s my debug atlas, 2048x512 with 4x non-padded 512px sub-textures in it:

https://dl.dropboxusercontent.com/u/136375/img/atlas.png

Now, the individual images have what may look like borders, but those are meant to be part of the respective sub-texture, keep that in mind :wink: they’re not there to provide padding. I should probably switch to better real-world textures…

Here are the import settings for this atlas.png — no mips, no aniso, no trilinear (only bilinear) filtering, no compression, no downsizing:

https://dl.dropboxusercontent.com/u/136375/img/screens/unity-atlas-01.png

(This is because I want to attack standard filtering artifacts before tackling any possible mip-bleeding that may occur later on…)

Here’s my “texture-atlas tiling” surface shader. It’s pretty simple (prototyping-quality code) and overall it almost does the job fine, except for the border seam that’s bothering me:

Shader "Custom/azAtlased" {
	Properties {
		_MainTex("Base (RGB)", 2D) = "white" {}

		// xy is atlas slicing (here we have 4 sprites in 1 row)
		// zw is tiling (here repeat 2x in X and 3x in Y)
		_SlicingAndTiling("_SlicingAndTiling", Vector) = (0.25, 1.0, 2.0, 3.0)

		// which sub-texture to render, anything between 0 and 3 in this setup
		_TexIndex("_TexIndex", Float) = 1
	}
	SubShader {
		Tags {
			"RenderType"="Opaque"
		}

		CGPROGRAM
		#pragma glsl
		#pragma target 3.0
		#pragma surface surf BlinnPhong exclude_path:prepass nolightmap noforwardadd novertexlights

		sampler2D _MainTex;
		float4 _SlicingAndTiling;
		float _TexIndex;

		struct Input {
			float2 uv_MainTex;
		};

		void surf (Input IN, inout SurfaceOutput o) {
			float2 uv = IN.uv_MainTex;
			uv = (frac(uv * _SlicingAndTiling.zw) + _TexIndex) * _SlicingAndTiling.xy;
			float4 col = tex2D(_MainTex, uv);
			o.Albedo = col.rgb;
			o.Alpha = col.a;
		}
		ENDCG
	}
	FallBack "Diffuse"
}

Now there’s a tiny seam that always appears the same way. I tried all combinations of filtering (bi/tri), mips on/off, wrap clamp/repeat, and aniso 0/1/2/9: the seam is always there, and it’s always at most 1px wide no matter how far from or close to the geometry the camera is:

https://dl.dropboxusercontent.com/u/136375/img/screens/unity-atlas-02.png

(right-click / open-in-new-tab for full-size view)

Since this happens with mips off, it can’t be LOD-related, so I’m thinking ddx/ddy won’t help here anyway.

The seam only occurs across one axis (because there’s only 1 row in the atlas). It’s always flickering between the neighboring tiles’ white and/or orange tint, so the shader is clearly sampling into those areas of the atlas.

The only filtering mode in which the seam does not appear is Point.

I vaguely remember from my earlier days of playing with GLSL that there’s a way to change the default “center of a texel” for the (uv) interpolator, but I’m not sure whether that would even help, and we probably don’t get access to it from ShaderLab anyway?

Would ddx/ddy help here at all, given this isn’t a LOD issue (I get it even with mips off)? I’m using floats instead of halfs, so maybe there’s a smart way to ever-so-slightly shrink the uv by that tiny fraction that is actually “oversampling”?

Figure out on which sides of the sampled texture the seam appears and then move the UVs half a pixel in that direction.

The texture seam problem you’re encountering is because your UV coordinates follow a sawtooth function where there is a discontinuity every time you wrap from the end of a tile back to its beginning. The short form of tex2D computes its own partial derivatives, and gets very high values at those discontinuities.

The extra arguments to tex2D are the approximate partial derivatives of your UV coordinates with respect to screen space. What you want to do is use your UV parameter, but without the sawtooth wrapping applied. Transform it as usual, but without the frac() part. Then use ddx/ddy to get the partial derivatives, and pass those to tex2D.

Please note that I’ve never done any of this, so I might be completely misinterpreting the documentation. Let me know how it goes!

Thanks, both of you! I did get it to work well with ddx, but only once I removed my own tiling logic and, for simpler experimentation, used the standard tiling: that seems to pre-transform the uv I’m getting in surf, and with that I could do a ddx/ddy that performs as required with tex2D.

I’m still mystified about what I should pass to ddx/ddy when applying my own tiling, though. I pretty much “tried all possible combinations” (I know, a deeper understanding of the behind-the-scenes would be preferable, but “oh well” :smile: ) without much success; I’ll keep investigating :wink:

I’m not fully certain how to apply this advice with regards to my shader code above, to be honest :smile: do I do ddx(frac(uv))? Or ddx(frac(uv * transform))? Or… :slight_smile:

The UV coordinates you’re passing are discontinuous. They’re correct, but discontinuous. The discontinuity is what gets you your tiling, but it’s also what screws up the mipmap selection. The long form of tex2D() lets you pass your own derivatives. You need to pass the derivatives of your transformed UVs, with the exception of the part that makes them discontinuous: frac().

            float2 uv = IN.uv_MainTex;
            // same transform, minus the frac(), computed from the raw uv
            float2 uvContinuous = (uv * _SlicingAndTiling.zw + _TexIndex) * _SlicingAndTiling.xy;
            uv = (frac(uv * _SlicingAndTiling.zw) + _TexIndex) * _SlicingAndTiling.xy;
            float4 col = tex2D(_MainTex, uv, ddx(uvContinuous), ddy(uvContinuous));

I see, thanks a lot for the clarification! Will let you know how it goes :wink:

What bugs me is that if I want to support different tilings for different sprites later on, it seems I’ll have to compute ddx/ddy multiple times in surf, at least when blending sprites. But OK…

Yes that did the trick, good stuff and many thanks! :wink:

Deleted my previous post, because I’m STILL fighting with this, actually. Although Daniel Brauer’s advice on discontinuous coords and ddx/ddy (thanks again!) did help select the correct mip level:

https://dl.dropboxusercontent.com/u/136375/img/screens/unity-tiledatlas-01.png

On the left, for comparison (the “goal state”), is a non-atlased individual texture with bilinear filtering, standard mips, aniso = 0, and tiling 333,3 (it’s an extremely looong stretched plane I’m testing on, to better cover the extremes).

On the right is a plane with the same dimensions using the tiled-atlas shader, selecting the atlased sub-texture equivalent to the non-atlased one above, same tiling. It “works” (again, you can see it does select the proper mip levels) as long as point filtering is used, which is the case in the above screen.

The dark tone in the very back is just the lowest mip-levels, happy to ignore those.

Code for this:

inline float4 texAtlased(in Input IN, in float2 tileIndex, in float4 tilingAndOffset) {
	const float2 uvt = IN.uv_MainTex * tilingAndOffset.xy + tilingAndOffset.zw;
	// tiling: wrap into [0,1) within the tile
	float2 uv = frac(uvt);
	// slicing: move to the specified tile
	uv = _SlicingAndSize.xy * (uv + tileIndex);
	// same steps, minus the frac(), for correct mip lod
	float2 uvd = _SlicingAndSize.xy * (uvt + tileIndex);
	return tex2D(_MainTex, uv, ddx(uvd), ddy(uvd));
}

Now the original challenge comes back with a vengeance: removing the seams resulting from any filtering (other than point) sampling around the edges of the tiled sub-tile! Here’s the above code with the atlas also set to bilinear/aniso0:

https://dl.dropboxusercontent.com/u/136375/img/screens/unity-tiledatlas-02.png

It is clear, similar to what Dolkar said, that the uv coords (ranging from 0…1) need to be remapped at the edges to a new range within 0…1 that just barely avoids oversampling across the tile edges. Naively, one could remap from 0…1 to something like 0.03…0.97. That actually works very well on small, known geometry, but it destroys correct mip-level selection, and there’s no single magic value that works well across all kinds of scales and tilings and slicings etc.

Some nvidia doc I found suggests “0.5 / tile-size”, so 0.5/512 in my case. That does fix the seams that still occur even with no mipmaps (which in practice is never desirable), but in the scenario illustrated above it’s just another useless magic value that’s “too much close-up but too little further away”.
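
For what it’s worth, the “0.5 / tile-size” suggestion amounts to remapping the per-tile uv from [0,1] into [inset, 1 − inset] before the slice offset is applied. A C sketch of my reading of that doc, assuming this thread’s 512px tiles:

```c
/* Half-texel inset remap: squeeze the per-tile uv from [0,1] into
   [inset, 1-inset] so bilinear lookups never straddle the tile edge.
   (Sketch of the nvidia suggestion; tileTexels = 512 in this thread.) */
static float inset_uv(float u, float tileTexels) {
    float inset = 0.5f / tileTexels;            /* half a texel of mip 0 */
    return inset + u * (1.0f - 2.0f * inset);   /* u=0 -> inset, u=1 -> 1-inset */
}
```

The catch, as observed above: half a texel of mip 0 is only enough for the top mip; smaller mips have bigger texels, which is why the seams come back in the distance.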

Looking at the above screenshot, it becomes clear that the bleeding seams aren’t even present very close up, but further away they slowly grow, and in the far distance they actually take over.

So at the closer mip levels no or very-little remapping should take place, at the further mip levels I would need to remap the uvs increasingly.

Sounds like another case for ddx/ddy or fwidth, right? But now I’m really at a loss: on the one hand we use the uvs for ddx/ddy, on the other we’d need ddx/ddy for proper uv remapping.

Pretty tricky, right? Any ideas at all from the gurus? My brain is pretty fried right now… it’s amazing how much time I’m spending on this without getting anywhere. But I think I’m so close: somehow, depending on “screen-space change”, the uvs need to be remapped into a smaller range without overshooting and killing proper mip-level selection… I played around aimlessly with using fwidth or min(ddx,ddy) or max(ddx,ddy) as the remap factor, but no luck.

I think this is the point at which you should consider another approach. You’re just trying to reduce draw calls?

What sort of game are you making?

@MetaLeap,
What’s the whole point of adding tiled textures to an atlas and then modifying the shader to achieve tiling again? It will add a new draw call if you modify the material instance, right? I was also trying this method but couldn’t find the right way.

Sorry to see that this implementation caused such problems.

You can read any part of the texture, from 0–1 of each offset or from .001 to .999, and obviously check that your atlas packing margin = 0… why have a margin in an atlas anyway, anyone?

Then… ddx doesn’t rely on UVs; it works in screen space. It returns, for ANY parameter, how that parameter changes between neighbouring pixels on the graphics card, so you can interpolate colors or uv values based on a neighbouring pixel. I didn’t understand exactly the nature of the problem, though.

I’m surprised that there isn’t a texture-atlas shader on the Unity site to go with their texture-packing code?!?!?