So I have no problem rotating a texture in the fragment shader based on data from another source texture.
The problem is, it’s a 256 data texture mapped so each texel covers 1m×1m on the actual mesh. So as you rotate the texture there are big seams where each 1m square is at a different angle.
Now if I limit the angles to 15° increments it’s not quite so bad, because more 1m squares will share the same angle. If you are wondering, this is a field and I am planting it and turning.
So I would love to somehow blend each 1m square into its neighbours. I think if I could do some fancy UV offsets so that the lines in the texture still line up with each other after rotation, that might help.
I am not sure what I can do to blend it where tiles meet at a 45°+ difference though; the transition is just sharp.
This is a really difficult problem to solve. Not the specific case you have, but having complex UVs like this and getting the edges to line up. Unless it’s done manually or with a significant amount of iterative processing, it’s basically impossible.
The usual solution to this is don’t do it.
A more complex solution which may yield acceptable results is to do a blend between each tile. To do that you would need to calculate the rotation for the tile the pixel is in, and for the three other tiles closest to that pixel, and blend between all four as you go between them.
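That four-tile blend can be sketched in plain Python standing in for the fragment shader math. Everything here is hypothetical (`TILE_ANGLES`, `rotate_uv`, the `sample` callback); a real shader would read the angle from the data texture and fetch the detail texture with `tex2D`:

```python
import math

# Hypothetical per-tile rotation angles in radians, one per 1m tile
# (in the real setup these live in the alpha channel of the data texture).
TILE_ANGLES = [
    [0.0, 0.3],
    [0.6, 0.9],
]

def rotate_uv(u, v, angle, cx=0.5, cy=0.5):
    """Rotate a UV coordinate around the tile centre."""
    du, dv = u - cx, v - cy
    c, s = math.cos(angle), math.sin(angle)
    return (cx + c * du - s * dv, cy + s * du + c * dv)

def sample_field(x, y, sample):
    """Blend the detail sample across the 4 nearest tiles.

    `sample(u, v)` is a stand-in for a bilinear texture fetch; x, y are
    world-space metres. Each tile rotates the shared detail UVs by its
    own angle, and the four results are bilinearly weighted so the
    result changes smoothly as you cross tile boundaries.
    """
    # The four nearest tile centres surround the point offset by half a tile.
    fx, fy = x - 0.5, y - 0.5
    x0, y0 = int(math.floor(fx)), int(math.floor(fy))
    tx, ty = fx - x0, fy - y0          # bilinear blend weights
    total = 0.0
    for j, wy in ((0, 1 - ty), (1, ty)):
        for i, wx in ((0, 1 - tx), (1, tx)):
            ang = TILE_ANGLES[(y0 + j) % 2][(x0 + i) % 2]
            u, v = rotate_uv(x % 1.0, y % 1.0, ang)
            total += wx * wy * sample(u, v)
    return total
```

Because the four weights always sum to one, a flat input stays flat; the seams only soften where neighbouring tiles disagree on angle.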
See, I had a plowed texture and a planted texture as well. It was hard to make those look like they were from the same field. So I realized: what if I just scale the UVs of the plowed texture so it makes the furrows closer together/smaller? It should blend color-wise really well, and it had the added effect of looking nicer when rotating compared to my first picture.
bgolus, are my 4 texture samples in the fragment shader (doing the filtering myself) more expensive than using the bilinear filtering in the texture settings, or is the GPU doing the same thing? I figured it was doing something a little more streamlined when set in the texture settings. The reason I have to use point filtering is that the RGBA texture I am updating from a script is basically an RGB splat with the alpha channel being the rotation angle, and filtering really screws up the angle data along changes. So that is why I decided to do the sampling/filtering of just the RGB channels in the frag shader. Thoughts on that?
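For reference, the split described here (manual bilinear on the RGB splat weights, point sampling the angle in A) can be simulated in plain Python. `TEX`, `sample_splat`, and the `texel` helper are hypothetical stand-ins for the data texture and the shader's fetches:

```python
import math

# Tiny stand-in for the RGBA data texture: RGB are splat weights,
# A is a rotation angle that must NOT be interpolated.
TEX = [
    [(1.0, 0.0, 0.0, 0.00), (0.0, 1.0, 0.0, 1.57)],
    [(0.0, 0.0, 1.0, 0.00), (1.0, 1.0, 1.0, 3.14)],
]

def sample_splat(u, v):
    """Bilinear-filter RGB in 'shader' code, point-sample the angle."""
    w, h = len(TEX[0]), len(TEX)
    x, y = u * w - 0.5, v * h - 0.5    # texel-space position
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    tx, ty = x - x0, y - y0            # bilinear fractions

    def texel(i, j):
        # Clamp to the edge, like clamp wrap mode.
        return TEX[max(0, min(h - 1, j))][max(0, min(w - 1, i))]

    rgb = [0.0, 0.0, 0.0]
    for j, wy in ((0, 1 - ty), (1, ty)):
        for i, wx in ((0, 1 - tx), (1, tx)):
            t = texel(x0 + i, y0 + j)
            for c in range(3):
                rgb[c] += wx * wy * t[c]
    # The angle comes from the single nearest texel, untouched by filtering,
    # so it never gets averaged across a change in direction.
    angle = texel(int(round(x)), int(round(y)))[3]
    return rgb, angle
```

This is exactly why filtering the angle in hardware breaks: lerping 1.57 toward 0.0 produces angles that were never in the data, while the point-sampled lookup always returns one of the real values.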
Thanks for your thoughts, they are worth their characters in gold
Awesome. What is the reasoning for saying that pixel proximity affects performance? Isn’t it just another address for looking up a pixel’s color in a texture? When sampling, does the GPU have a small cache for recently sampled pixels that happen to be nearby, or something?
Also, I just realized that a lot of Mali GPUs, like the one in my S6, don’t even have the aniso extension, which just blows my mind. Would it be insane to try to implement aniso in the fragment shader? Without the aniso extension, it’s surely still doing bilinear sampling at least, right?
My understanding is yes, each sampler has a small amount of cache. So once it’s sampled a texture, sampling the same texture again within a small area (no more than a pixel or maybe two) is a bit cheaper. Sampling outside of that range is going to be slower. It’s a bit of a toss-up as to which will be faster between sampling one texture four times within a small region vs. sampling the same texture using multiple texture samplers (i.e. have 4 texture properties of the same texture and sample only once from each). For larger areas the 4 samplers will be faster, and on Mali it might be faster even in a small region.
And yes, I seem to remember Mali didn’t even have Trilinear for a while. It is absolutely doing Bilinear at the very least.
Is it theoretically possible to do some better filtering (it would have to be better than the bilinear that’s already done) in the fragment shader on devices that don’t support higher-level trilinear or anisotropic filtering? I’m guessing so, but would it be insanely expensive compared to it being implemented in the driver?
Yes. Anisotropic texture filtering is slow to implement in the shader. Moreover, no one actually implements true aniso in hardware, and the approximations and optimizations GPU manufacturers use aren’t very shader-code friendly, especially on mobile platforms: lots of branching and variable loop lengths.
However you could implement something like a traditional 3 tap anisotropic filtering in the shader for not a lot of cost.
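A 3-tap scheme like that could look something like the following sketch: three bilinear samples spread along the screen-space derivative direction (the axis the texture is stretched along at glancing angles), averaged. All names here are hypothetical; in a real shader `sample` would be a `tex2D` call and `dudx`/`dvdx` would come from `ddx(uv)`:

```python
def aniso3(sample, u, v, dudx, dvdx):
    """Minimal 3-tap anisotropic filter sketch.

    `sample(u, v)` stands in for a hardware bilinear fetch, and the
    (dudx, dvdx) offsets stand in for the screen-space UV derivative.
    Averaging three taps along that axis reduces the aliasing that a
    single bilinear sample shows at grazing angles.
    """
    return (sample(u - dudx, v - dvdx)
            + sample(u, v)
            + sample(u + dudx, v + dvdx)) / 3.0
```

The cost is just two extra fetches per pixel, which is why this kind of fixed-tap approximation stays shader-friendly where true variable-rate aniso does not.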