_ScaledScreenParams and render target subregion

Hi,

Is there a reliable way to get the size of the subregion rendered to when the render pipeline asset’s RenderScale != 1? There is a _ScaledScreenParams global variable which gets set, but unfortunately it is often not integer-valued. For instance, with a resolution of 1741x980 and a render scale of 0.77, the value it gives is (1340.57, 754.6, 1.000746, 1.001325).

The first two components appear to be the resolution multiplied by the render scale. However, the subregion of the render target that actually gets rendered to has an integer size, and none of round, floor, or ceil reliably gives that size on all platforms/APIs.

Thanks,
Elliot

To provide some more data on this, I’ve taken a series of measurements at different render scales. I used a viewport size of 1246x600 and render scales of 0.75, 0.8, 0.85, and 0.9, and compared the output of floor, ceil, and round on RenderScale * viewport size against the measured size of the scaled target:

  • RenderScale 0.9: measured 1121x539. floor/round give the correct width; none gives the correct height.
  • RenderScale 0.85: measured 1059x510. floor/round give the correct width; all give the correct height.
  • RenderScale 0.8: measured 996x480. floor gives the correct width; all give the correct height.
  • RenderScale 0.75: measured 934x450. floor gives the correct width; all give the correct height.

In all cases _ScaledScreenParams.xy is equal to scale * viewport size. The width is always given by floor(viewport width * scale), but the height is not reliably given by floor, ceil, or round. I also tried a few other formulas based on maintaining the aspect ratio, but I can’t see a pattern there either.
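
In case anyone wants to reproduce the comparison, this is roughly what it boils down to (just a sketch; the helper name and the example call are mine, and the measured sizes have to come from wherever you can read the actual scaled target size, e.g. the frame debugger or a custom pass):

```csharp
using UnityEngine;

// Compute the floor/ceil/round candidates for RenderScale * viewport size and
// print them next to a measured scaled-target size, to see which (if any) match.
public static class ScaledSizeCandidates
{
    public static void Compare(Vector2Int viewport, float renderScale, Vector2Int measured)
    {
        float sw = viewport.x * renderScale;
        float sh = viewport.y * renderScale;

        Debug.Log($"scale {renderScale}: scaled = {sw}x{sh}, measured = {measured.x}x{measured.y}");
        Debug.Log($"  floor {Mathf.FloorToInt(sw)}x{Mathf.FloorToInt(sh)}, " +
                  $"ceil {Mathf.CeilToInt(sw)}x{Mathf.CeilToInt(sh)}, " +
                  $"round {Mathf.RoundToInt(sw)}x{Mathf.RoundToInt(sh)}");
    }
}

// e.g. ScaledSizeCandidates.Compare(new Vector2Int(1246, 600), 0.9f, new Vector2Int(1121, 539));
```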

For others who come across this: I couldn’t find a reliable way to predict, from within the shader, the size of the render target area actually used when RenderScale != 1. Instead, I decided to use a custom pass to set a global vector containing cameraTextureDescriptor.width and height; a sketch of that pass is below.
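
In case it helps, this is roughly what that pass looks like (a sketch against the non-RenderGraph ScriptableRenderPass API; the global name _ActualScaledScreenSize and the pass event are just my choices, so adapt it to your URP version):

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Renderer feature that exposes the real (integer) size of the scaled render
// target to shaders as a global vector, taken from cameraTextureDescriptor.
public class ScaledScreenSizeFeature : ScriptableRendererFeature
{
    class ScaledScreenSizePass : ScriptableRenderPass
    {
        static readonly int SizeId = Shader.PropertyToID("_ActualScaledScreenSize");
        Vector4 _size;

        public override void Configure(CommandBuffer cmd, RenderTextureDescriptor cameraTextureDescriptor)
        {
            // The descriptor already has RenderScale applied, so width/height are
            // the integer dimensions of the area that actually gets rendered to.
            _size = new Vector4(cameraTextureDescriptor.width,
                                cameraTextureDescriptor.height,
                                1f / cameraTextureDescriptor.width,
                                1f / cameraTextureDescriptor.height);
        }

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            CommandBuffer cmd = CommandBufferPool.Get("Set _ActualScaledScreenSize");
            cmd.SetGlobalVector(SizeId, _size);
            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }
    }

    ScaledScreenSizePass _pass;

    public override void Create()
    {
        _pass = new ScaledScreenSizePass
        {
            renderPassEvent = RenderPassEvent.BeforeRenderingOpaques
        };
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        renderer.EnqueuePass(_pass);
    }
}
```

Shaders can then declare float4 _ActualScaledScreenSize; and use that instead of trying to round _ScaledScreenParams.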
