No, all interpolated values get perspective correct barycentric interpolation by default, except SV_POSITION.
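To spell out what perspective correction actually does: the GPU interpolates each attribute divided by the vertex's clip space w linearly across the triangle in screen space, then divides by the similarly interpolated 1/w per pixel. A rough sketch of that math (this happens in fixed function hardware, not in your shader code; the function and variable names here are purely illustrative):
```hlsl
// Rough sketch of perspective correct barycentric interpolation.
// b is the pixel's screen space barycentric weights, v0/v1/v2 are the
// per vertex attribute values, and w holds the clip space w of each
// vertex. None of this is real shader code you'd write yourself.
float2 InterpolatePerspectiveCorrect(
    float3 b, float2 v0, float2 v1, float2 v2, float3 w)
{
    // Interpolate attribute/w and 1/w linearly in screen space...
    float2 numer = b.x * v0 / w.x + b.y * v1 / w.y + b.z * v2 / w.z;
    float  denom = b.x / w.x + b.y / w.y + b.z / w.z;
    // ...then divide the two to recover the perspective correct value.
    return numer / denom;
}
```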
As an example of what I mean by perspective correct barycentric interpolation, here are quad meshes (two triangles) with the same UVs and texture being displayed in three slightly different ways.

On the left is the quad rendered facing the camera normally. On the right it's rotated 45 degrees away from the camera and scaled so it's the same screen space height as the first.
In the middle is… well, it could be the two triangle quad with the top two vertices moved closer together, or it could be the rotated and scaled quad with perspective correct barycentric interpolation disabled. They look exactly the same, so it's actually impossible to know from that image which one it is!
In this case it happens to be the same rotated and scaled quad as on the right, but with perspective correction disabled. But my main point is the per vertex UV data in all 3 of these examples is exactly the same and is unmodified by the GPU. The bottom left is (0,0), top right is (1,1), etc. The only difference between these is how the data is interpolated across the triangle. It's always barycentric interpolation, i.e. 3 point interpolation, because they're triangles. But the interpolated data can either get perspective correction or not. If the middle one happened to be a quad with the top two vertices moved closer together, it would look the same with or without perspective correction since all 4 vertices are the same depth, so there's no perspective to correct for. The left looks the same in both cases as well for the same reason.
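Incidentally, if you ever want the middle result on purpose, HLSL lets you opt out of perspective correction per interpolator with the noperspective modifier. Something like this in the vertex to fragment struct (struct and member names are just for illustration):
```hlsl
struct v2f
{
    float4 pos : SV_POSITION;
    // Default: perspective correct barycentric interpolation.
    float2 uv : TEXCOORD0;
    // noperspective: plain linear screen space interpolation, which
    // produces the warped middle example above on a rotated quad.
    noperspective float2 uvNoPersp : TEXCOORD1;
};
```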
Now, let's do one more test. Let's use screen space UVs.

Similar setup, but now instead of using the mesh UV, we're using the screen space positions (this is using the built in renderer, so it's using the values from ComputeScreenPos(), but it's the same as if I was using positionNDC). The middle and right examples are both geometry that's been rotated 45 degrees away, but the middle is doing the divide by w in the vertex shader, and the right is doing the divide by w in the fragment shader.
The important question is: why do we need to do the divide by w at all? It's to undo perspective correction. In fact, the right is also what doing the divide by w in the vertex shader and disabling perspective correction looks like. Or what a mesh with the top two vertices moved closer together would look like either way. The left looks the same either way because, again, there's no perspective correction to do.
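To make the two versions concrete, here's roughly the difference in code, using the built in renderer's ComputeScreenPos() (the struct and variable names are mine):
```hlsl
struct v2f
{
    float4 pos : SV_POSITION;
    float4 screenPos : TEXCOORD0;
};

v2f vert (appdata_base v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    // ComputeScreenPos() remaps clip space xy into a 0..w range,
    // but importantly does not divide by w.
    o.screenPos = ComputeScreenPos(o.pos);
    // "Broken" version: uncommenting this divides per vertex, and the
    // already-divided value then gets perspective corrected anyway,
    // producing the warped middle example.
    // o.screenPos.xy /= o.screenPos.w;
    return o;
}

// The correct place to divide is here, after the perspective correct
// interpolation has already happened.
fixed4 frag (v2f i) : SV_Target
{
    float2 screenUV = i.screenPos.xy / i.screenPos.w;
    return fixed4(screenUV, 0, 1);
}
```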
SV_POSITION is a different beast, because the data assigned in the vertex shader stage is not the data that the fragment shader gets. In the vertex shader you set the homogeneous clip space position for that vertex, and in the fragment shader you get the xy pixel position, the z depth, and the w world depth (or 1 for ortho). Actually, the w is the only value from the homogeneous clip space position that remains untouched.
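In shader terms, the asymmetry looks something like this (a minimal sketch using built in renderer functions; everything but the semantics is illustrative):
```hlsl
struct appdata { float4 vertex : POSITION; };

// Vertex shader: what you assign to SV_POSITION is the homogeneous
// clip space position.
float4 vert (appdata v) : SV_POSITION
{
    return UnityObjectToClipPos(v.vertex);
}

// Fragment shader: what arrives in SV_POSITION is different data:
//   xy = pixel coordinates (0.5 to resolution - 0.5)
//   z  = z buffer depth
//   w  = world depth (or 1 for ortho), the only untouched component
fixed4 frag (float4 screenPos : SV_POSITION) : SV_Target
{
    // e.g. reconstruct 0..1 screen UVs from the pixel position
    float2 screenUV = screenPos.xy / _ScreenParams.xy;
    return fixed4(screenUV, 0, 1);
}
```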
As for why your positionCS2 use isn't working and positionNDC is… in what way isn't it working? Technically those two don't quite match, but they should be very similar. As long as you're setting positionCS2 = positionCS and passing the full float4, you should get plausible screen UVs with that code. They might be flipped upside down in some situations compared to positionNDC, but it should work.
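For reference, here's roughly the setup I'm describing, assuming positionCS2 is just the clip space position copied into a regular interpolator (names taken from your description, built in renderer functions used for illustration):
```hlsl
struct v2f
{
    float4 positionCS  : SV_POSITION;
    float4 positionCS2 : TEXCOORD0; // the full float4, not divided
};

v2f vert (float4 positionOS : POSITION)
{
    v2f o;
    o.positionCS  = UnityObjectToClipPos(positionOS);
    o.positionCS2 = o.positionCS; // pass the clip space position along
    return o;
}

fixed4 frag (v2f i) : SV_Target
{
    // Divide by w here in the fragment shader, then remap the
    // -1..1 NDC range to 0..1 screen UVs.
    float2 screenUV = (i.positionCS2.xy / i.positionCS2.w) * 0.5 + 0.5;
    // Depending on the graphics API and render target, this can be
    // flipped vertically compared to positionNDC.
    return fixed4(screenUV, 0, 1);
}
```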