Deforming camera output for projection mapping

Hi there! I’m using Unity in a theater space with a projector, and want to adjust the camera output to squash, stretch, or generally deform in various ways to match the real-world objects it’s being projected onto.

I know I can adjust the camera’s projection matrix, which has worked well for setting an off-center projection and similar tweaks, but (as far as I can tell) that only lets me scale or shift the whole image vertically or horizontally, not deform individual regions of it.
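For reference, here’s a minimal sketch of the kind of projection-matrix adjustment I mean. The component name and shift parameters are placeholders I made up; the Matrix4x4.Frustum call is what builds the asymmetric frustum:

```csharp
using UnityEngine;

// Rough sketch: build an off-center (asymmetric) frustum and assign it
// to the camera, shifting the image without moving the camera itself.
[RequireComponent(typeof(Camera))]
public class OffCenterProjection : MonoBehaviour
{
    // Shift amounts as a fraction of the frustum's half-width/half-height.
    public float horizontalShift = 0.1f;
    public float verticalShift = 0f;

    void LateUpdate()
    {
        Camera cam = GetComponent<Camera>();
        float near = cam.nearClipPlane;

        // Half-extents of the near plane for a symmetric frustum.
        float top = near * Mathf.Tan(cam.fieldOfView * 0.5f * Mathf.Deg2Rad);
        float right = top * cam.aspect;

        // Offsetting both sides by the same amount slides the whole image;
        // scaling the extents instead squashes or stretches it uniformly.
        float dx = horizontalShift * right;
        float dy = verticalShift * top;

        cam.projectionMatrix = Matrix4x4.Frustum(
            -right + dx, right + dx,
            -top + dy, top + dy,
            near, cam.farClipPlane);
    }
}
```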

My current way of solving this issue is as follows (there’s a rough code sketch of the setup after the list):

  • Have a camera output to a RenderTexture
  • Apply a material with that RenderTexture to a ProBuilder plane
  • Make cuts in the plane so it has separate sections for where the real-world objects are
  • Use the ProBuilder UV editor to resize the sections to get the desired amount of squash or stretch
  • Point an orthographic camera at the plane, and that’s your final output!
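Here’s roughly what that setup looks like wired up at runtime. Everything here can just as easily be done in the editor; the field names and texture size are my own, and the ProBuilder cutting and UV work still happen by hand:

```csharp
using UnityEngine;

// Rough sketch of the two-camera setup. Assumes the plane already has
// its ProBuilder cuts and UV adjustments; this only wires up the texture.
public class ProjectionMappingSetup : MonoBehaviour
{
    public Camera sceneCamera;      // renders the actual scene
    public Renderer deformedPlane;  // the ProBuilder plane the ortho camera sees
    public int textureWidth = 5760; // oversized target to avoid quality loss
    public int textureHeight = 3600;

    void Start()
    {
        // The scene camera renders into this texture instead of the screen.
        var rt = new RenderTexture(textureWidth, textureHeight, 24);
        rt.antiAliasing = 4; // MSAA on the intermediate texture
        sceneCamera.targetTexture = rt;

        // The plane samples the texture; the ProBuilder UVs decide which
        // screen region each cut section shows, giving the squash/stretch.
        deformedPlane.material.mainTexture = rt;
    }
}
```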

Here’s an example of the plane with the render texture applied: [image]

And here’s the final output, along with a template showing the physical world it matches up with: [image]

My main problem with this method is that the extra steps degrade the final output. In my actual scenario, the particles are much smaller than those in the images above (often less than one pixel), and the buttery-smooth output that subpixel anti-aliasing provides becomes flickery again once a separate camera has to view the render texture.

That said, rendering out to a texture larger than the output (mine is 5760x3600) means there isn’t really any quality loss, and setting the Render Scale to 2 in the Quality settings of my Universal Render Pipeline Asset removes the flickering almost entirely (I discovered that while writing this post!), so this is a workable, if clunky, solution.
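In case it’s useful, the Render Scale can also be set from script rather than in the asset’s inspector. This assumes URP is the active pipeline; the component name is made up:

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Rough sketch: bump URP's Render Scale at startup to supersample and
// tame the subpixel flicker. Note that in the editor this modifies the
// pipeline asset itself, so the value persists after play mode ends.
public class RenderScaleBoost : MonoBehaviour
{
    void Start()
    {
        var urp = GraphicsSettings.currentRenderPipeline as UniversalRenderPipelineAsset;
        if (urp != null)
            urp.renderScale = 2f;
    }
}
```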

My question is this: is this actually the best approach? Are there any alternative strategies that don’t require two cameras and a render texture?