I’m using a “filmstrip”-style texture: 64 frames, each 64 pixels wide, laid out in one long texture that I shift along the x axis to simulate animation. So the total texture is 4096 pixels wide, and my tiling parameter is 1/64 = .015625. When I enter this in the inspector, it rounds to .016. That loses precision: 4096 * .016 = 65.536, so each frame’s sample window is 65.536 pixels wide instead of 64 and bleeds into the neighboring frame, and I’m seeing artifacts.
Is there a way to specify the number of decimal places in the inspector? This limitation doesn’t exist when entering transform coordinates, so why does the inspector round texture parameters? And if I can’t do it in the inspector, can I do it through code?
Thanks,
Brian
You’re going to have problems anyway with a number of graphics cards that don’t support textures wider than 2048. You’re probably better off making an 8x8 square than a 1x64 strip.
–Eric
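
For anyone converting the strip to a grid, the frame-to-offset math looks roughly like this. A minimal sketch, assuming the frames run left to right, top to bottom, and a standard material with a _MainTex property (both are assumptions, not from the thread); since Unity’s UV origin is the bottom-left corner, the row has to be flipped to count from the top.

```csharp
using UnityEngine;

// Sketch: addressing 64 frames in an 8x8 atlas instead of a 1x64 strip.
public class AtlasFrame : MonoBehaviour
{
    const int Cols = 8, Rows = 8;
    Material mat;

    void Awake()
    {
        mat = GetComponent<Renderer>().material;
        // Each tile covers 1/8 of the texture per axis: exactly 0.125.
        mat.SetTextureScale("_MainTex", new Vector2(1f / Cols, 1f / Rows));
    }

    public void ShowFrame(int frame) // frame in 0..63
    {
        int col = frame % Cols;
        int row = frame / Cols;
        // Unity's V axis runs bottom-up, so flip the row so that
        // frame 0 lands on the top-left tile of the image.
        mat.SetTextureOffset("_MainTex",
            new Vector2(col / (float)Cols, 1f - (row + 1f) / Rows));
    }
}
```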
Going with an 8 x 8 square is a good idea. That way the tiling is 1/8 = .125, which fits exactly in three decimal places.
But that still doesn’t answer the question of why I can’t use 1/64. Even with a texture that is 1024 pixels wide, I should be able to split it equally into 64 parts, each 16 pixels wide, right? But if I try to set the horizontal tiling to 1/64, it rounds and I lose precision. Is there no workaround, or am I missing something about graphics cards?
– Brian
That’s just a limitation of the editor. Using SetTextureScale from script doesn’t have any limitations aside from the usual 32-bit float ones.
–Eric
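
To illustrate Eric’s point, a minimal sketch (the _MainTex property name is an assumption for a standard material): 1/64 is a power of two, so 0.015625 is stored exactly in a 32-bit float, and per Eric the rounding is purely an editor limitation.

```csharp
using UnityEngine;

public class ExactTiling : MonoBehaviour
{
    void Start()
    {
        // 1f / 64f == 0.015625f exactly; no rounding happens here.
        GetComponent<Renderer>().material.SetTextureScale(
            "_MainTex", new Vector2(1f / 64f, 1f));
    }
}
```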
Does it matter when I call SetTextureScale? Does it need to go in Awake() or Start(), or can it be called at any time? I’m thinking of putting it in Start() so the scale is applied before anything is visible.
Thanks,
Brian
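
For what it’s worth, a minimal sketch of the placement Brian describes, assuming a 64-frame horizontal strip, a _MainTex property, and a made-up playback rate. Start() runs before the object’s first rendered frame, so a scale set there is in effect before anything is visible.

```csharp
using UnityEngine;

// Sketch: scale set once in Start(), offset stepped in Update().
public class FilmstripAnimator : MonoBehaviour
{
    const int FrameCount = 64;
    public float framesPerSecond = 12f; // assumed playback rate

    Material mat;

    void Start()
    {
        mat = GetComponent<Renderer>().material;
        mat.SetTextureScale("_MainTex", new Vector2(1f / FrameCount, 1f));
    }

    void Update()
    {
        // Advance one tile along x per animation frame.
        int frame = (int)(Time.time * framesPerSecond) % FrameCount;
        mat.SetTextureOffset("_MainTex",
            new Vector2(frame / (float)FrameCount, 0f));
    }
}
```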