# Texture rendering artefacts on iPad 3rd Gen, iPad 4th Gen, iPhone 4

We are experiencing some strange rendering artefacts inside textures when using shaders.

Our setup is the following:

One quad, with the shader and the texture below assigned to its material. The perspective camera is looking at this quad.

The texture is 512x1024, Advanced, Alpha from GS, Bypass sRGB, Clamp, Bilinear, Max Size: 1024, Format: Alpha 8.

The texture contains 16-bit data (low bytes in rows 0…511 in y, high bytes in rows 512…1023). To reconstruct the value in the shader, I do two lookups:

```
float4 h = tex2D(_MainTex, i.uv);
float4 l = tex2D(_MainTex, i.uv + float2(0.0, 0.5));
```

reconstruct the value:

```
float value = (h.a*255 + l.a)*255;
```

and scale the value back into the 0…1 range with a mapping function.

The problem is that on the above-mentioned devices we can't reconstruct the correct value. iPad 2nd Gen and iPad Air work!
It seems that because the high bytes are in a very low range (0…8), bilinear interpolation filters these values out like a low-pass filter, even though the Alpha 8 format should not be compressed. We can compensate for this by scaling the high-byte range, e.g. 0…8 up to 0…255, and scaling it back after the texture lookup, but this doesn't really solve the problem.
If we change from bilinear to point filtering, we get line artefacts between pixels, either white or black.
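This behaviour is consistent with quantisation after filtering rather than with filtering itself: the reconstruction is linear, so exact bilinear interpolation would commute with it, but if the hardware rounds the filtered sample back to limited precision, an error of one LSB in the high byte shifts the result by 256. A rough Python sketch of the idea (using the intended high-byte-times-256 reconstruction; the 8-bit quantisation of the filtered sample is an assumption about the hardware, not a confirmed fact):

```python
def encode(value):
    # split a 16-bit value into normalised high/low alpha samples
    return (value >> 8) / 255.0, (value & 0xFF) / 255.0

def decode(h, l):
    # intended reconstruction: value = (h*256 + l) * 255
    return (h * 256 + l) * 255

def quantize8(a):
    # model a GPU that rounds the filtered sample back to 8 bits
    return round(a * 255) / 255

# two adjacent texels encoding the values 255 and 256
h0, l0 = encode(255)
h1, l1 = encode(256)

# bilinear sample halfway between them
t = 0.5
h_f = (1 - t) * h0 + t * h1
l_f = (1 - t) * l0 + t * l1

exact  = decode(h_f, l_f)                        # 255.5: filtering commutes with decode
coarse = decode(quantize8(h_f), quantize8(l_f))  # far off: the half-LSB in h is lost
```

With exact arithmetic the filtered sample decodes to 255.5, but once the filtered high-byte alpha is snapped back to an 8-bit grid the result jumps by over a hundred, which would look exactly like a low-pass on the high bytes.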

This image shows the remaining artefacts with compensation.

Any clue what could be the problem?

Wouldn’t you have uv in the 0…1 range? So you’d divide v by 2 to do the first lookup, then add a half to do the second.

I don’t understand that. Wouldn’t you multiply by 256 inside the brackets?

The UVs are in the 0…1 range. This is done in the vertex shader:

```
v2f vert (appdata_img v) {
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    o.uv = v.texcoord.xy * float2(1.0, 0.5); // stretch by 2 in y
    return o;
}
```
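For what it's worth, the addressing itself checks out. A quick sanity check (assuming texel-centre sampling on the 1024-row texture, with logical row r stored at physical rows r and r+512) shows the halved uv plus the 0.5 offset hit the two intended rows:

```python
HEIGHT = 1024  # physical texture height; 512 logical rows

def sampled_rows(r):
    # uv.y for logical row r after the vertex shader's * 0.5
    v = (r + 0.5) / 512 * 0.5
    first  = int(v * HEIGHT)          # row hit by tex2D(_MainTex, i.uv)
    second = int((v + 0.5) * HEIGHT)  # row hit by tex2D(_MainTex, i.uv + float2(0, 0.5))
    return first, second
```

Logical row 0 maps to physical rows 0 and 512, and logical row 511 to rows 511 and 1023, so both lookups land on texel centres and the artefacts shouldn't be a simple addressing bug.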

You are right about the second remark. The line should read:

```
float value = (h.a*256 + l.a)*255;
```

But this has no influence on the mentioned artefacts.
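The 256-vs-255 distinction is easy to verify outside the shader: with the 256 factor the round trip is exact for every 16-bit value, while the 255 factor comes out short by exactly the high byte. A plain Python sketch of the arithmetic:

```python
def encode(value):
    # normalised alpha samples for the high and low byte
    return (value >> 8) / 255.0, (value & 0xFF) / 255.0

def decode255(h, l):
    # the original shader line: comes out as value minus the high byte
    return (h * 255 + l) * 255

def decode256(h, l):
    # the corrected line: exact round trip for 0…65535
    return (h * 256 + l) * 255

# e.g. 51234 has high byte 200 and low byte 34
h, l = encode(51234)
```

This also shows why the fix alone can't remove the artefacts: the error it corrects is a constant offset per high byte, not the noisy per-pixel corruption described above.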

Not sure if this is relevant, but the devices it isn’t working on use different GPUs from the ones it does work on. From the Apple docs:

“Medium- and low-precision floating-point shader values are computed identically, as 16-bit floating point values. This is a change from the PowerVR SGX hardware, which used 10-bit fixed-point format for low-precision values. If your shaders use low-precision floating point variables and you also support the PowerVR SGX hardware, you must test your shaders on both GPUs.”

The ones it seems to work on use the A7 GPU (described above); the ones it doesn’t use the SGX with the 10-bit fixed-point format. I would have thought it would work OK, though, since you’re using float4 and not half4 etc…
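The range of that 10-bit fixed-point format may matter even more than its precision. If the lowp format only guarantees roughly [-2, 2) with 1/256 steps (the GLSL ES spec minimum; the actual SGX behaviour is an assumption here), then the intermediate h.a*256 saturates long before the final scale. A crude stdlib-only model:

```python
def lowp(x):
    # 10-bit fixed point: range [-2, 2), step 1/256 (GLSL ES lowp minimum)
    step = 1.0 / 256
    q = round(x / step) * step
    return max(-2.0, min(2.0 - step, q))

h, l = 200 / 255, 34 / 255        # alphas encoding 200*256 + 34 = 51234

full    = (h * 256 + l) * 255              # ~51234 at full precision
clamped = (lowp(h * 256) + lowp(l)) * 255  # h*256 saturates near 2 -> garbage
```

If any intermediate in the reconstruction is evaluated at lowp on the SGX, the result collapses to a few hundred regardless of the input, which would fit the "works on A7, fails on SGX" pattern.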

Cheers, T

P.S. I know this is unlikely, but it also looks a bit like it’s not using the 32-bit display buffer - might be worth double-checking.

Thanks for your suggestions. Unfortunately, disabling/enabling the 32-bit display buffer has no visible effect on any of the devices.

Dual-core PowerVR SGX543MP2 GPU - iPad 2 - works

PowerVR SGX535 GPU - iPhone 4 - doesn’t work

PowerVR G6430 - iPad Air - works

I’m using float/float4 for all values. Additionally, I define `#pragma fragmentoption ARB_precision_hint_nicest`. However, the information loss seems to come from the interpolation routine.

That’s all pretty strange. If you feel like posting a copy of the compiled shader, I’ll have a look and see if I can spot anything odd (the one you get when you click ‘open compiled shader’ in Unity).

1590680–95454–$UnityDicom.zip (654 KB)

I uploaded a small, complete Unity project demonstrating the issue, including the shader. The artefacts are not visible in Unity, only on the device…

Looking at the compiled shader, I see that Unity introduces some temp variables with low precision and later assigns them to high-precision variables. Strange?!

I also did the shader in GLSL, defining all variables as highp. Still no change.

Thanks. Not sure I’ll be able to look at it quickly, but I’ll let you know if I spot anything.

Hi,

I noticed that changing ARB_precision_hint_nicest to ARB_precision_hint_fastest made no difference - it was still using low-precision code in the compiled shader. Changing the sampler to high precision seems to change the compiled code a fair bit, so it might be worth trying this:

```
uniform sampler2D_float _MainTex;
```

instead of:

```
uniform sampler2D _MainTex;
```

Cheers,T

Hi,

Thanks for your effort! I tested it and still see the same artefacts, despite the fact that in the compiled shader lowp changed to highp.

With artefact:

That’s strange. The only thing I can think of is that the limitations of floating point are reducing the accuracy of your values somewhere, but that doesn’t really explain why it works on certain devices.

You could always convert the texture manually in case Unity is reducing the quality somehow. You can use PVRTexTool to do this and use the .pvr file directly in Unity. It could also be some difference between OpenGL ES 2/3, so it might be worth locking it to OpenGL ES 2 for testing.