I have a simple 3D FPS game (think the original Wolfenstein and Doom) where the textures are done in an old-school pixel style, but I find I'm having to save every texture out of Photoshop at a minimum of, say, 512x512 (even though the actual texture art for the walls is only 32x32) just to stop it looking aliased when the game is running.
If I set the textures to Point (no filter) in Unity, they look jagged and shimmery. If I set them to Bilinear or Trilinear, they look clean, but only if I first export them from Photoshop at the much higher resolution; otherwise Unity's filtering turns them into a blurred mess.
So how am I supposed to use pixel-art textures as intended, drawn at their natural low resolution (so they don't take up much space each), but without the blurring?
When you change their size in Photoshop, under Image Size make sure "Resample" is unchecked; that way the pixel data is never resampled when you change the size. If you do need to change the actual pixel dimensions (e.g. 32x32 up to 512x512), set the resample method to "Nearest Neighbor (hard edges)" instead, which scales the pixels up without blending them.
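If you end up doing that big export a lot, the resize is also easy to automate outside Photoshop. Here is a minimal C# sketch using System.Drawing (Windows, or the System.Drawing.Common package); the file names and the x16 factor are just placeholders, not anything from your project:

    using System.Drawing;
    using System.Drawing.Drawing2D;

    class PixelUpscale
    {
        static void Main()
        {
            // Placeholder paths; point these at your own 32x32 source art
            using (var src = new Bitmap("wall_32.png"))
            using (var dst = new Bitmap(src.Width * 16, src.Height * 16))
            using (var g = Graphics.FromImage(dst))
            {
                // Nearest-neighbor keeps the hard pixel edges intact
                g.InterpolationMode = InterpolationMode.NearestNeighbor;
                // Half-pixel offset so the first row/column isn't smeared
                g.PixelOffsetMode = PixelOffsetMode.Half;
                g.DrawImage(src, 0, 0, dst.Width, dst.Height);
                dst.Save("wall_512.png");
            }
        }
    }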
Since bilinear interpolation naturally blends between every pair of adjacent pixels across their entire width, you'll need to write a shader which overrides that default blending.
For this, you'll want to decide on your basis for calculation:
A) Manually interpolate between the pixels of a point-filtered texture (which requires no fewer than 4 pixel reads), then adapt the blending itself; a rough sketch of this follows the list.
B) Manipulate the texture reads on a bilinear-filtered texture (a single, though more costly, read) to make it look like a point-filtered texture, by adapting the position of the pixel that gets read.
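Here is a rough, untested sketch of what approach A could look like. It is not part of the shader further down; it assumes the texture is imported with Point (no filter) and reuses the _MainTex, _MainTex_TexelSize and _Threshold declarations and the v2f struct from the full shader below:

    fixed4 fragManualBlend (v2f i) : SV_Target
    {
        // Texel-space position, shifted so texel centers land on integers
        float2 pixel = i.uv * _MainTex_TexelSize.zw - 0.5;
        float2 f = frac(pixel);
        // UV of the texel center at the lower-left of the 2x2 neighborhood
        float2 base = (floor(pixel) + 0.5) * _MainTex_TexelSize.xy;
        // Blend only inside a band of width (1 - _Threshold) centered on the
        // texel border; everywhere else snap to the nearest texel
        float2 w = saturate((f - 0.5) / (1.0 - _Threshold) + 0.5);
        // The four point-filtered reads (behavior at the texture border
        // follows the wrap mode in the import settings)
        fixed4 c00 = tex2D(_MainTex, base);
        fixed4 c10 = tex2D(_MainTex, base + float2(_MainTex_TexelSize.x, 0.0));
        fixed4 c01 = tex2D(_MainTex, base + float2(0.0, _MainTex_TexelSize.y));
        fixed4 c11 = tex2D(_MainTex, base + _MainTex_TexelSize.xy);
        // Standard bilinear mix, just with the sharpened weights
        return lerp(lerp(c00, c10, w.x), lerp(c01, c11, w.x), w.y);
    }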
The following isn't complete either, but it should get you started in the right direction for approach B. Using an unlit shader as the baseline to keep it short:
Shader "Unlit/FilterScale"
{
Properties
{
_MainTex ("Texture", 2D) = "white" {}
_Threshold ("Rounding Threshold", Range(0.0, 1.0)) = 0.5
}
SubShader
{
Tags { "RenderType"="Opaque" }
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float2 uv : TEXCOORD0;
float4 vertex : SV_POSITION;
};
sampler2D _MainTex;
float4 _MainTex_ST;
float4 _MainTex_TexelSize;
float _Threshold;
v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos (v.vertex);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
return o;
}
fixed4 frag (v2f i) : SV_Target
{
// Half-pixel offset to grab the center of each pixel to disguise bilinear-filtered textures as point-sampled
float2 halfPixel = _MainTex_TexelSize.xy * 0.5;
// Location (0-1) on a given pixel for the current sample
float2 pixelPos = frac(i.uv * _MainTex_TexelSize.zw);
// sub-pixel position transformed into (-1 to 1) range from pixel center
pixelPos = pixelPos * 2.0 - 1.0;
float2 scale;
float2 absPixelPos = abs(pixelPos);
// If sub-pixel position is near an edge (_Threshold), use point-filtering (scale = 0)
// Otherwise, use an analog dead zone calculation to approximate blending
// (note: can be improved)
// http://www.third-helix.com/2013/04/12/doing-thumbstick-dead-zones-right.html
if(absPixelPos.x < _Threshold)
{
scale.x = 0.0;
}
else
{
scale.x = (absPixelPos.x - _Threshold) / (1.0 - _Threshold);
}
if (absPixelPos.y < _Threshold)
{
scale.y = 0.0;
}
else
{
scale.y = (absPixelPos.y - _Threshold) / (1.0 - _Threshold);
}
// Calculate the new real UV coordinate by blending between the center of the current pixel and the original sample position per axis
float2 uvCoord;
uvCoord.x = lerp(floor(i.uv.x * _MainTex_TexelSize.z) * _MainTex_TexelSize.x + halfPixel.x, i.uv.x, scale.x);
uvCoord.y = lerp(floor(i.uv.y * _MainTex_TexelSize.w) * _MainTex_TexelSize.y + halfPixel.y, i.uv.y, scale.y);
float4 col = tex2D(_MainTex, uvCoord);
return col;
}
ENDCG
}
}
}
I apologize that it’s a little sloppy (I got the algorithm close enough to function well, but can’t remember what I’m missing to make it just a little cleaner).
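One usage note: for this approach the texture should stay Bilinear-filtered in the import settings, since the shader itself is what fakes the point sampling, and Unity fills in _MainTex_TexelSize for you. You can simply assign the shader to a material in the editor; if you'd rather wire it up from script, a minimal sketch (assuming the object has a Renderer; the class name is made up) could look like:

    using UnityEngine;

    public class ApplyFilterScale : MonoBehaviour
    {
        void Start()
        {
            var rend = GetComponent<Renderer>();
            // Shader.Find matches the name on the shader's first line
            rend.material.shader = Shader.Find("Unlit/FilterScale");
            // The texture stays bilinear; the shader fakes the point sampling
            rend.material.mainTexture.filterMode = FilterMode.Bilinear;
        }
    }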
I understand your question, and why it looks good with bilinear filtering when the texture is saved at a much larger scale in Photoshop. I'm looking for the same thing.
Have you found an easy solution yet?
Some random thoughts on this subject.
What you would like to have as a result is comparable to antialiasing, which is used to make geometry edges look smoother, except that instead of cleaning up geometry edges you want to do it between the pixels of a texture. Technically I can see why this is more GPU-intensive than just using high-res textures as the source. Maybe the textures could be upscaled when the game starts somehow. Or maybe the whole game could be rendered at twice the resolution it is displayed at on screen, although that is in no way GPU-friendly. I think you also get into trouble with mipmaps when using very low resolution textures.
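For what it's worth, the "upscale when the game starts" idea is simple enough to sketch in C#. Something like the following (untested; it assumes the source texture has Read/Write enabled in its import settings, and NearestUpscale is just a name I made up):

    using UnityEngine;

    public static class NearestUpscale
    {
        // Builds an enlarged copy of a low-res texture by repeating each pixel,
        // so bilinear filtering only blurs the thin seams between big pixels.
        public static Texture2D Upscale(Texture2D source, int factor)
        {
            var result = new Texture2D(source.width * factor, source.height * factor);
            for (int y = 0; y < result.height; y++)
            {
                for (int x = 0; x < result.width; x++)
                {
                    // Integer division maps each output pixel back to its source pixel
                    result.SetPixel(x, y, source.GetPixel(x / factor, y / factor));
                }
            }
            result.Apply(); // upload the pixel data to the GPU
            result.filterMode = FilterMode.Bilinear;
            return result;
        }
    }

The trade-off is memory rather than GPU time: a 32x32 RGBA texture is 4 KB, while its 512x512 copy is 1 MB.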
So those are my random thoughts; like you, I just hope there is an easy solution.