Hello everyone!
I have a very simple problem :)
I need the distance from the camera to each pixel, in the pixel shader, so I can change the transparency smoothly.
Why in the pixel shader? Because the object's geometry can change, and that should not affect the effect.
This shader changes the alpha; it computes the distance in the vertex shader.
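A minimal sketch of that approach (the _Color, _FadeStart, and _FadeEnd properties are placeholder names):

    Shader "Custom/VertexDistanceFade" {
        Properties {
            _Color ("Color", Color) = (1,1,1,1)
            _FadeStart ("Fade Start", Float) = 1.0
            _FadeEnd ("Fade End", Float) = 5.0
        }
        SubShader {
            Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
            Blend SrcAlpha OneMinusSrcAlpha
            Pass {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                fixed4 _Color;
                float _FadeStart;
                float _FadeEnd;

                struct v2f {
                    float4 pos : SV_POSITION;
                    float dist : TEXCOORD0;
                };

                v2f vert (appdata_base v) {
                    v2f o;
                    o.pos = UnityObjectToClipPos(v.vertex);
                    // distance from the camera to this vertex, in object space
                    o.dist = length(ObjSpaceViewDir(v.vertex));
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target {
                    fixed4 c = _Color;
                    // fade out as the camera approaches the vertex
                    c.a *= saturate((i.dist - _FadeStart) / (_FadeEnd - _FadeStart));
                    return c;
                }
                ENDCG
            }
        }
    }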
ObjSpaceViewDir is going to get the distance from a point to the camera in object space, which, if you've got a scaling factor on your model importer, is going to give you a weird length for that vector. What you really want is the world distance from the camera to the vertex.
For that you just need: o.dist = length(WorldSpaceViewDir(v.vertex));
However, using world distance for near fading might still have some odd problems depending on your camera's near clip distance. Basically, you might still see some clipping occur: the value is the distance from the camera to the vertex, and the distance from the vertex to the near clip plane is going to be less. So you might want to put a small bias on your distance, like -0.1.
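Dropped into the sketch above, that's just a change to the vertex function (the 0.1 bias is an illustrative value):

    v2f vert (appdata_base v) {
        v2f o;
        o.pos = UnityObjectToClipPos(v.vertex);
        // world-space distance from the camera to the vertex,
        // minus a small bias to compensate for the near clip plane
        o.dist = length(WorldSpaceViewDir(v.vertex)) - 0.1;
        return o;
    }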
blogus, thank you for the answer. But this is not exactly what I wanted.
This solution changes a pixel's color based on the distance from the vertices to the camera, not from the pixel itself.
For example, take a plane. If the camera is close to the plane's vertices, all is well: the color changes as it should. But if you move the camera toward the center of the plane, where there are no vertices, the color does not change (and in this case everything is bad).
Values calculated at each vertex are interpolated across the surface of the triangle for each pixel. So in your case let's think of just a straight line with two vertices, one at each end. Both points are 1 unit away from the camera, so every pixel will also use the value of 1. It doesn't matter if the center of that line is 0.05 units away from the camera and the two vertices are on nearly opposite sides of the camera: the distance calculated at each vertex was 1, so the interpolated value is 1.
So yes, you need to calculate the distance at every pixel. To do that, just pass the float3 value from that function from the vertex shader to the pixel shader and take the length there, as in the sketch below. Alternatively, you can calculate the z depth rather than the distance, which interpolates properly and doesn't need any additional work in the pixel shader.
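A minimal sketch of the per-pixel version, reusing the placeholder _Color/_FadeStart/_FadeEnd properties from the sketch above:

    struct v2f {
        float4 pos : SV_POSITION;
        float3 viewDir : TEXCOORD0; // un-normalized world-space vector from vertex to camera
    };

    v2f vert (appdata_base v) {
        v2f o;
        o.pos = UnityObjectToClipPos(v.vertex);
        // pass the vector itself; it is linear in position,
        // so it interpolates to the true per-pixel vector
        o.viewDir = WorldSpaceViewDir(v.vertex);
        return o;
    }

    fixed4 frag (v2f i) : SV_Target {
        fixed4 c = _Color;
        // take the length here, after interpolation, to get the per-pixel distance
        float dist = length(i.viewDir);
        c.a *= saturate((dist - _FadeStart) / (_FadeEnd - _FadeStart));
        return c;
    }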
Thanks very much for this! It definitely seemed to work very well.
For future people coming to this thread, you may wish to adjust the distance slightly. Personally, I used
// Note: viewPos is the same value called pos in the example above;
// I renamed it because my shader already used the variable pos.
float dist = 10 - (length(i.viewPos) / 50);
if (dist > 1) dist = 1;
if (dist < 0.0001) dist = 0;
toReturn.a *= dist;
The dist < 0.0001 check was to avoid artifacts that seemed to arise with very low alpha values on the transparent object I was using.
The best thing about this solution (which I struggled to find elsewhere) is that it doesn't rely on the depth texture: the object I was modifying the shader for was transparent, and so did not write to the depth texture.
In my case I just pass the camera's global position into the shader as a Vector4 with material.SetVector() every frame.
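On the shader side that looks something like the sketch below (the uniform name _CameraWorldPos is an arbitrary placeholder; in the built-in pipeline Unity also provides _WorldSpaceCameraPos, which avoids the manual upload):

    float4 _CameraWorldPos; // set from C# each frame via material.SetVector()

    struct v2f {
        float4 pos : SV_POSITION;
        float3 worldPos : TEXCOORD0;
    };

    v2f vert (appdata_base v) {
        v2f o;
        o.pos = UnityObjectToClipPos(v.vertex);
        // world position of the vertex; interpolates to the pixel's world position
        o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
        return o;
    }

    fixed4 frag (v2f i) : SV_Target {
        fixed4 c = _Color;
        // per-pixel distance from the camera, no depth texture involved
        float dist = distance(i.worldPos, _CameraWorldPos.xyz);
        c.a *= saturate(10.0 - dist / 50.0); // same falloff as the snippet above, without the snap to zero
        return c;
    }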